I0419 23:35:44.307300 8 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0419 23:35:44.307550 8 e2e.go:124] Starting e2e run "b81b16fd-535e-4780-bae5-f734f87c6a06" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587339343 - Will randomize all specs
Will run 275 of 4992 specs

Apr 19 23:35:44.359: INFO: >>> kubeConfig: /root/.kube/config
Apr 19 23:35:44.363: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 19 23:35:44.388: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 19 23:35:44.421: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 19 23:35:44.421: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 19 23:35:44.421: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 19 23:35:44.428: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 19 23:35:44.428: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 19 23:35:44.428: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 19 23:35:44.429: INFO: kube-apiserver version: v1.17.0
Apr 19 23:35:44.429: INFO: >>> kubeConfig: /root/.kube/config
Apr 19 23:35:44.435: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:35:44.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Apr 19 23:35:44.495: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-17156f2a-50ad-4cc4-82c7-b12e0c7956c3 in namespace container-probe-4927
Apr 19 23:35:48.532: INFO: Started pod liveness-17156f2a-50ad-4cc4-82c7-b12e0c7956c3 in namespace container-probe-4927
STEP: checking the pod's current state and verifying that restartCount is present
Apr 19 23:35:48.536: INFO: Initial restart count of pod liveness-17156f2a-50ad-4cc4-82c7-b12e0c7956c3 is 0
Apr 19 23:36:10.585: INFO: Restart count of pod container-probe-4927/liveness-17156f2a-50ad-4cc4-82c7-b12e0c7956c3 is now 1 (22.049319394s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 19 23:36:10.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4927" for this suite.
• [SLOW TEST:26.217 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":35,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:36:10.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 19 23:36:10.724: INFO: Waiting up to 5m0s for pod "downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187" in namespace "downward-api-6370" to be "Succeeded or Failed"
Apr 19 23:36:10.933: INFO: Pod "downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187": Phase="Pending", Reason="", readiness=false. Elapsed: 208.982015ms
Apr 19 23:36:12.937: INFO: Pod "downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212627854s
Apr 19 23:36:14.941: INFO: Pod "downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216841287s
STEP: Saw pod success
Apr 19 23:36:14.941: INFO: Pod "downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187" satisfied condition "Succeeded or Failed"
Apr 19 23:36:14.944: INFO: Trying to get logs from node latest-worker2 pod downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187 container dapi-container:
STEP: delete the pod
Apr 19 23:36:14.981: INFO: Waiting for pod downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187 to disappear
Apr 19 23:36:14.999: INFO: Pod downward-api-61ae2fb4-3eba-47e8-806e-b0499a24a187 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 19 23:36:14.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6370" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":75,"failed":0}
SS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:36:15.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7152, will wait for the garbage collector to delete the pods
Apr 19 23:36:19.154: INFO: Deleting Job.batch foo took: 7.049951ms
Apr 19 23:36:19.555: INFO: Terminating Job.batch foo pods took: 400.274015ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 19 23:37:03.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7152" for this suite.
• [SLOW TEST:48.067 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":3,"skipped":77,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:37:03.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4
[It] Scaling should happen in predictable order and halt if any stateful pod is
unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4 Apr 19 23:37:03.154: INFO: Found 0 stateful pods, waiting for 1 Apr 19 23:37:13.159: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 19 23:37:13.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 19 23:37:15.435: INFO: stderr: "I0419 23:37:15.331198 31 log.go:172] (0xc0004e8000) (0xc0005ab860) Create stream\nI0419 23:37:15.331253 31 log.go:172] (0xc0004e8000) (0xc0005ab860) Stream added, broadcasting: 1\nI0419 23:37:15.334377 31 log.go:172] (0xc0004e8000) Reply frame received for 1\nI0419 23:37:15.334464 31 log.go:172] (0xc0004e8000) (0xc000325720) Create stream\nI0419 23:37:15.334501 31 log.go:172] (0xc0004e8000) (0xc000325720) Stream added, broadcasting: 3\nI0419 23:37:15.335486 31 log.go:172] (0xc0004e8000) Reply frame received for 3\nI0419 23:37:15.335519 31 log.go:172] (0xc0004e8000) (0xc00044caa0) Create stream\nI0419 23:37:15.335528 31 log.go:172] (0xc0004e8000) (0xc00044caa0) Stream added, broadcasting: 5\nI0419 23:37:15.336470 31 log.go:172] (0xc0004e8000) Reply frame received for 5\nI0419 23:37:15.401104 31 log.go:172] (0xc0004e8000) Data frame received for 5\nI0419 23:37:15.401294 31 log.go:172] (0xc00044caa0) (5) Data frame handling\nI0419 23:37:15.401316 31 log.go:172] (0xc00044caa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0419 23:37:15.427015 31 log.go:172] (0xc0004e8000) Data frame received 
for 3\nI0419 23:37:15.427055 31 log.go:172] (0xc000325720) (3) Data frame handling\nI0419 23:37:15.427132 31 log.go:172] (0xc000325720) (3) Data frame sent\nI0419 23:37:15.427283 31 log.go:172] (0xc0004e8000) Data frame received for 3\nI0419 23:37:15.427314 31 log.go:172] (0xc000325720) (3) Data frame handling\nI0419 23:37:15.427344 31 log.go:172] (0xc0004e8000) Data frame received for 5\nI0419 23:37:15.427366 31 log.go:172] (0xc00044caa0) (5) Data frame handling\nI0419 23:37:15.429724 31 log.go:172] (0xc0004e8000) Data frame received for 1\nI0419 23:37:15.429757 31 log.go:172] (0xc0005ab860) (1) Data frame handling\nI0419 23:37:15.429784 31 log.go:172] (0xc0005ab860) (1) Data frame sent\nI0419 23:37:15.429807 31 log.go:172] (0xc0004e8000) (0xc0005ab860) Stream removed, broadcasting: 1\nI0419 23:37:15.429836 31 log.go:172] (0xc0004e8000) Go away received\nI0419 23:37:15.430132 31 log.go:172] (0xc0004e8000) (0xc0005ab860) Stream removed, broadcasting: 1\nI0419 23:37:15.430149 31 log.go:172] (0xc0004e8000) (0xc000325720) Stream removed, broadcasting: 3\nI0419 23:37:15.430155 31 log.go:172] (0xc0004e8000) (0xc00044caa0) Stream removed, broadcasting: 5\n" Apr 19 23:37:15.436: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 19 23:37:15.436: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 19 23:37:15.440: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 19 23:37:25.444: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 19 23:37:25.444: INFO: Waiting for statefulset status.replicas updated to 0 Apr 19 23:37:25.462: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999346s Apr 19 23:37:26.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995751591s Apr 19 23:37:27.470: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 7.991147997s Apr 19 23:37:28.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987320762s Apr 19 23:37:29.478: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982876893s Apr 19 23:37:30.482: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97916322s Apr 19 23:37:31.487: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974803292s Apr 19 23:37:32.491: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.970518138s Apr 19 23:37:33.496: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.966046731s Apr 19 23:37:34.501: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.600219ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4 Apr 19 23:37:35.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 19 23:37:35.722: INFO: stderr: "I0419 23:37:35.634568 66 log.go:172] (0xc0003c7a20) (0xc00067f2c0) Create stream\nI0419 23:37:35.634613 66 log.go:172] (0xc0003c7a20) (0xc00067f2c0) Stream added, broadcasting: 1\nI0419 23:37:35.637360 66 log.go:172] (0xc0003c7a20) Reply frame received for 1\nI0419 23:37:35.637398 66 log.go:172] (0xc0003c7a20) (0xc0008dc000) Create stream\nI0419 23:37:35.637411 66 log.go:172] (0xc0003c7a20) (0xc0008dc000) Stream added, broadcasting: 3\nI0419 23:37:35.638275 66 log.go:172] (0xc0003c7a20) Reply frame received for 3\nI0419 23:37:35.638303 66 log.go:172] (0xc0003c7a20) (0xc0005e74a0) Create stream\nI0419 23:37:35.638313 66 log.go:172] (0xc0003c7a20) (0xc0005e74a0) Stream added, broadcasting: 5\nI0419 23:37:35.639146 66 log.go:172] (0xc0003c7a20) Reply frame received for 5\nI0419 23:37:35.716505 66 log.go:172] (0xc0003c7a20) Data frame received for 5\nI0419 23:37:35.716541 66 
log.go:172] (0xc0003c7a20) Data frame received for 3\nI0419 23:37:35.716561 66 log.go:172] (0xc0008dc000) (3) Data frame handling\nI0419 23:37:35.716570 66 log.go:172] (0xc0008dc000) (3) Data frame sent\nI0419 23:37:35.716580 66 log.go:172] (0xc0003c7a20) Data frame received for 3\nI0419 23:37:35.716595 66 log.go:172] (0xc0005e74a0) (5) Data frame handling\nI0419 23:37:35.716623 66 log.go:172] (0xc0005e74a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0419 23:37:35.716637 66 log.go:172] (0xc0003c7a20) Data frame received for 5\nI0419 23:37:35.716649 66 log.go:172] (0xc0005e74a0) (5) Data frame handling\nI0419 23:37:35.716663 66 log.go:172] (0xc0008dc000) (3) Data frame handling\nI0419 23:37:35.717914 66 log.go:172] (0xc0003c7a20) Data frame received for 1\nI0419 23:37:35.717932 66 log.go:172] (0xc00067f2c0) (1) Data frame handling\nI0419 23:37:35.717944 66 log.go:172] (0xc00067f2c0) (1) Data frame sent\nI0419 23:37:35.717960 66 log.go:172] (0xc0003c7a20) (0xc00067f2c0) Stream removed, broadcasting: 1\nI0419 23:37:35.717977 66 log.go:172] (0xc0003c7a20) Go away received\nI0419 23:37:35.718255 66 log.go:172] (0xc0003c7a20) (0xc00067f2c0) Stream removed, broadcasting: 1\nI0419 23:37:35.718266 66 log.go:172] (0xc0003c7a20) (0xc0008dc000) Stream removed, broadcasting: 3\nI0419 23:37:35.718272 66 log.go:172] (0xc0003c7a20) (0xc0005e74a0) Stream removed, broadcasting: 5\n" Apr 19 23:37:35.722: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 19 23:37:35.722: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 19 23:37:35.725: INFO: Found 1 stateful pods, waiting for 3 Apr 19 23:37:45.729: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 19 23:37:45.730: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 19 23:37:45.730: INFO: Waiting 
for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 19 23:37:45.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 19 23:37:45.965: INFO: stderr: "I0419 23:37:45.900399 86 log.go:172] (0xc00058d1e0) (0xc0005c9540) Create stream\nI0419 23:37:45.900500 86 log.go:172] (0xc00058d1e0) (0xc0005c9540) Stream added, broadcasting: 1\nI0419 23:37:45.903819 86 log.go:172] (0xc00058d1e0) Reply frame received for 1\nI0419 23:37:45.903900 86 log.go:172] (0xc00058d1e0) (0xc000557860) Create stream\nI0419 23:37:45.903935 86 log.go:172] (0xc00058d1e0) (0xc000557860) Stream added, broadcasting: 3\nI0419 23:37:45.905247 86 log.go:172] (0xc00058d1e0) Reply frame received for 3\nI0419 23:37:45.905296 86 log.go:172] (0xc00058d1e0) (0xc0005c95e0) Create stream\nI0419 23:37:45.905317 86 log.go:172] (0xc00058d1e0) (0xc0005c95e0) Stream added, broadcasting: 5\nI0419 23:37:45.906214 86 log.go:172] (0xc00058d1e0) Reply frame received for 5\nI0419 23:37:45.959249 86 log.go:172] (0xc00058d1e0) Data frame received for 5\nI0419 23:37:45.959288 86 log.go:172] (0xc0005c95e0) (5) Data frame handling\nI0419 23:37:45.959317 86 log.go:172] (0xc0005c95e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0419 23:37:45.959334 86 log.go:172] (0xc00058d1e0) Data frame received for 5\nI0419 23:37:45.959383 86 log.go:172] (0xc0005c95e0) (5) Data frame handling\nI0419 23:37:45.959426 86 log.go:172] (0xc00058d1e0) Data frame received for 3\nI0419 23:37:45.959455 86 log.go:172] (0xc000557860) (3) Data frame handling\nI0419 23:37:45.959469 86 log.go:172] (0xc000557860) (3) Data frame sent\nI0419 23:37:45.959483 86 log.go:172] (0xc00058d1e0) Data frame received for 3\nI0419 
23:37:45.959489 86 log.go:172] (0xc000557860) (3) Data frame handling\nI0419 23:37:45.960564 86 log.go:172] (0xc00058d1e0) Data frame received for 1\nI0419 23:37:45.960579 86 log.go:172] (0xc0005c9540) (1) Data frame handling\nI0419 23:37:45.960592 86 log.go:172] (0xc0005c9540) (1) Data frame sent\nI0419 23:37:45.960604 86 log.go:172] (0xc00058d1e0) (0xc0005c9540) Stream removed, broadcasting: 1\nI0419 23:37:45.960614 86 log.go:172] (0xc00058d1e0) Go away received\nI0419 23:37:45.961021 86 log.go:172] (0xc00058d1e0) (0xc0005c9540) Stream removed, broadcasting: 1\nI0419 23:37:45.961052 86 log.go:172] (0xc00058d1e0) (0xc000557860) Stream removed, broadcasting: 3\nI0419 23:37:45.961063 86 log.go:172] (0xc00058d1e0) (0xc0005c95e0) Stream removed, broadcasting: 5\n" Apr 19 23:37:45.965: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 19 23:37:45.965: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 19 23:37:45.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 19 23:37:46.235: INFO: stderr: "I0419 23:37:46.091421 106 log.go:172] (0xc000b5c160) (0xc000b4a0a0) Create stream\nI0419 23:37:46.091501 106 log.go:172] (0xc000b5c160) (0xc000b4a0a0) Stream added, broadcasting: 1\nI0419 23:37:46.098465 106 log.go:172] (0xc000b5c160) Reply frame received for 1\nI0419 23:37:46.098519 106 log.go:172] (0xc000b5c160) (0xc000adc0a0) Create stream\nI0419 23:37:46.098532 106 log.go:172] (0xc000b5c160) (0xc000adc0a0) Stream added, broadcasting: 3\nI0419 23:37:46.099744 106 log.go:172] (0xc000b5c160) Reply frame received for 3\nI0419 23:37:46.099856 106 log.go:172] (0xc000b5c160) (0xc000b4a140) Create stream\nI0419 23:37:46.099929 106 log.go:172] (0xc000b5c160) (0xc000b4a140) Stream added, 
broadcasting: 5\nI0419 23:37:46.101281 106 log.go:172] (0xc000b5c160) Reply frame received for 5\nI0419 23:37:46.163656 106 log.go:172] (0xc000b5c160) Data frame received for 5\nI0419 23:37:46.163700 106 log.go:172] (0xc000b4a140) (5) Data frame handling\nI0419 23:37:46.163724 106 log.go:172] (0xc000b4a140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0419 23:37:46.228144 106 log.go:172] (0xc000b5c160) Data frame received for 5\nI0419 23:37:46.228199 106 log.go:172] (0xc000b4a140) (5) Data frame handling\nI0419 23:37:46.228245 106 log.go:172] (0xc000b5c160) Data frame received for 3\nI0419 23:37:46.228265 106 log.go:172] (0xc000adc0a0) (3) Data frame handling\nI0419 23:37:46.228282 106 log.go:172] (0xc000adc0a0) (3) Data frame sent\nI0419 23:37:46.228303 106 log.go:172] (0xc000b5c160) Data frame received for 3\nI0419 23:37:46.228313 106 log.go:172] (0xc000adc0a0) (3) Data frame handling\nI0419 23:37:46.229981 106 log.go:172] (0xc000b5c160) Data frame received for 1\nI0419 23:37:46.230013 106 log.go:172] (0xc000b4a0a0) (1) Data frame handling\nI0419 23:37:46.230031 106 log.go:172] (0xc000b4a0a0) (1) Data frame sent\nI0419 23:37:46.230051 106 log.go:172] (0xc000b5c160) (0xc000b4a0a0) Stream removed, broadcasting: 1\nI0419 23:37:46.230076 106 log.go:172] (0xc000b5c160) Go away received\nI0419 23:37:46.230499 106 log.go:172] (0xc000b5c160) (0xc000b4a0a0) Stream removed, broadcasting: 1\nI0419 23:37:46.230529 106 log.go:172] (0xc000b5c160) (0xc000adc0a0) Stream removed, broadcasting: 3\nI0419 23:37:46.230547 106 log.go:172] (0xc000b5c160) (0xc000b4a140) Stream removed, broadcasting: 5\n" Apr 19 23:37:46.235: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 19 23:37:46.235: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 19 23:37:46.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 19 23:37:46.510: INFO: stderr: "I0419 23:37:46.367702 126 log.go:172] (0xc0009c7760) (0xc000af06e0) Create stream\nI0419 23:37:46.367769 126 log.go:172] (0xc0009c7760) (0xc000af06e0) Stream added, broadcasting: 1\nI0419 23:37:46.372920 126 log.go:172] (0xc0009c7760) Reply frame received for 1\nI0419 23:37:46.372970 126 log.go:172] (0xc0009c7760) (0xc0005dd540) Create stream\nI0419 23:37:46.372983 126 log.go:172] (0xc0009c7760) (0xc0005dd540) Stream added, broadcasting: 3\nI0419 23:37:46.374203 126 log.go:172] (0xc0009c7760) Reply frame received for 3\nI0419 23:37:46.374241 126 log.go:172] (0xc0009c7760) (0xc000522960) Create stream\nI0419 23:37:46.374251 126 log.go:172] (0xc0009c7760) (0xc000522960) Stream added, broadcasting: 5\nI0419 23:37:46.375112 126 log.go:172] (0xc0009c7760) Reply frame received for 5\nI0419 23:37:46.453656 126 log.go:172] (0xc0009c7760) Data frame received for 5\nI0419 23:37:46.453679 126 log.go:172] (0xc000522960) (5) Data frame handling\nI0419 23:37:46.453693 126 log.go:172] (0xc000522960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0419 23:37:46.500706 126 log.go:172] (0xc0009c7760) Data frame received for 3\nI0419 23:37:46.500740 126 log.go:172] (0xc0005dd540) (3) Data frame handling\nI0419 23:37:46.500768 126 log.go:172] (0xc0009c7760) Data frame received for 5\nI0419 23:37:46.500801 126 log.go:172] (0xc000522960) (5) Data frame handling\nI0419 23:37:46.500833 126 log.go:172] (0xc0005dd540) (3) Data frame sent\nI0419 23:37:46.501096 126 log.go:172] (0xc0009c7760) Data frame received for 3\nI0419 23:37:46.501276 126 log.go:172] (0xc0005dd540) (3) Data frame handling\nI0419 23:37:46.503417 126 log.go:172] (0xc0009c7760) Data frame received for 1\nI0419 23:37:46.503546 126 log.go:172] (0xc000af06e0) (1) Data frame handling\nI0419 23:37:46.503582 126 log.go:172] 
(0xc000af06e0) (1) Data frame sent\nI0419 23:37:46.503605 126 log.go:172] (0xc0009c7760) (0xc000af06e0) Stream removed, broadcasting: 1\nI0419 23:37:46.503630 126 log.go:172] (0xc0009c7760) Go away received\nI0419 23:37:46.504365 126 log.go:172] (0xc0009c7760) (0xc000af06e0) Stream removed, broadcasting: 1\nI0419 23:37:46.504392 126 log.go:172] (0xc0009c7760) (0xc0005dd540) Stream removed, broadcasting: 3\nI0419 23:37:46.504409 126 log.go:172] (0xc0009c7760) (0xc000522960) Stream removed, broadcasting: 5\n" Apr 19 23:37:46.510: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 19 23:37:46.510: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 19 23:37:46.510: INFO: Waiting for statefulset status.replicas updated to 0 Apr 19 23:37:46.515: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 19 23:37:56.523: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 19 23:37:56.523: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 19 23:37:56.523: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 19 23:37:56.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999399s Apr 19 23:37:57.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990320776s Apr 19 23:37:58.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985145575s Apr 19 23:37:59.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.979621413s Apr 19 23:38:00.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974016532s Apr 19 23:38:01.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.968712873s Apr 19 23:38:02.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96360933s Apr 19 23:38:03.577: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958466947s Apr 19 23:38:04.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953426389s Apr 19 23:38:05.593: INFO: Verifying statefulset ss doesn't scale past 3 for another 942.113407ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4 Apr 19 23:38:06.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 19 23:38:06.807: INFO: stderr: "I0419 23:38:06.724872 147 log.go:172] (0xc00097e000) (0xc00091e000) Create stream\nI0419 23:38:06.724935 147 log.go:172] (0xc00097e000) (0xc00091e000) Stream added, broadcasting: 1\nI0419 23:38:06.728171 147 log.go:172] (0xc00097e000) Reply frame received for 1\nI0419 23:38:06.728216 147 log.go:172] (0xc00097e000) (0xc00091e0a0) Create stream\nI0419 23:38:06.728231 147 log.go:172] (0xc00097e000) (0xc00091e0a0) Stream added, broadcasting: 3\nI0419 23:38:06.729436 147 log.go:172] (0xc00097e000) Reply frame received for 3\nI0419 23:38:06.729471 147 log.go:172] (0xc00097e000) (0xc00091e140) Create stream\nI0419 23:38:06.729480 147 log.go:172] (0xc00097e000) (0xc00091e140) Stream added, broadcasting: 5\nI0419 23:38:06.730524 147 log.go:172] (0xc00097e000) Reply frame received for 5\nI0419 23:38:06.800630 147 log.go:172] (0xc00097e000) Data frame received for 5\nI0419 23:38:06.800659 147 log.go:172] (0xc00091e140) (5) Data frame handling\nI0419 23:38:06.800670 147 log.go:172] (0xc00091e140) (5) Data frame sent\nI0419 23:38:06.800680 147 log.go:172] (0xc00097e000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0419 23:38:06.800698 147 log.go:172] (0xc00097e000) Data frame received for 3\nI0419 23:38:06.800747 147 log.go:172] (0xc00091e0a0) (3) Data frame handling\nI0419 23:38:06.800776 147 
log.go:172] (0xc00091e0a0) (3) Data frame sent\nI0419 23:38:06.800796 147 log.go:172] (0xc00097e000) Data frame received for 3\nI0419 23:38:06.800821 147 log.go:172] (0xc00091e0a0) (3) Data frame handling\nI0419 23:38:06.800838 147 log.go:172] (0xc00091e140) (5) Data frame handling\nI0419 23:38:06.802133 147 log.go:172] (0xc00097e000) Data frame received for 1\nI0419 23:38:06.802152 147 log.go:172] (0xc00091e000) (1) Data frame handling\nI0419 23:38:06.802167 147 log.go:172] (0xc00091e000) (1) Data frame sent\nI0419 23:38:06.802184 147 log.go:172] (0xc00097e000) (0xc00091e000) Stream removed, broadcasting: 1\nI0419 23:38:06.802411 147 log.go:172] (0xc00097e000) Go away received\nI0419 23:38:06.802480 147 log.go:172] (0xc00097e000) (0xc00091e000) Stream removed, broadcasting: 1\nI0419 23:38:06.802497 147 log.go:172] (0xc00097e000) (0xc00091e0a0) Stream removed, broadcasting: 3\nI0419 23:38:06.802508 147 log.go:172] (0xc00097e000) (0xc00091e140) Stream removed, broadcasting: 5\n" Apr 19 23:38:06.807: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 19 23:38:06.807: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 19 23:38:06.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 19 23:38:06.990: INFO: stderr: "I0419 23:38:06.924532 168 log.go:172] (0xc0000e8370) (0xc000689860) Create stream\nI0419 23:38:06.924608 168 log.go:172] (0xc0000e8370) (0xc000689860) Stream added, broadcasting: 1\nI0419 23:38:06.928145 168 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0419 23:38:06.928196 168 log.go:172] (0xc0000e8370) (0xc000a1c000) Create stream\nI0419 23:38:06.928209 168 log.go:172] (0xc0000e8370) (0xc000a1c000) Stream added, broadcasting: 3\nI0419 23:38:06.929448 168 
log.go:172] (0xc0000e8370) Reply frame received for 3\nI0419 23:38:06.929483 168 log.go:172] (0xc0000e8370) (0xc0009c4000) Create stream\nI0419 23:38:06.929492 168 log.go:172] (0xc0000e8370) (0xc0009c4000) Stream added, broadcasting: 5\nI0419 23:38:06.930449 168 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0419 23:38:06.983824 168 log.go:172] (0xc0000e8370) Data frame received for 5\nI0419 23:38:06.983891 168 log.go:172] (0xc0009c4000) (5) Data frame handling\nI0419 23:38:06.983917 168 log.go:172] (0xc0009c4000) (5) Data frame sent\nI0419 23:38:06.983936 168 log.go:172] (0xc0000e8370) Data frame received for 5\nI0419 23:38:06.983951 168 log.go:172] (0xc0009c4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0419 23:38:06.983973 168 log.go:172] (0xc0000e8370) Data frame received for 3\nI0419 23:38:06.984006 168 log.go:172] (0xc000a1c000) (3) Data frame handling\nI0419 23:38:06.984025 168 log.go:172] (0xc000a1c000) (3) Data frame sent\nI0419 23:38:06.984034 168 log.go:172] (0xc0000e8370) Data frame received for 3\nI0419 23:38:06.984042 168 log.go:172] (0xc000a1c000) (3) Data frame handling\nI0419 23:38:06.985534 168 log.go:172] (0xc0000e8370) Data frame received for 1\nI0419 23:38:06.985552 168 log.go:172] (0xc000689860) (1) Data frame handling\nI0419 23:38:06.985561 168 log.go:172] (0xc000689860) (1) Data frame sent\nI0419 23:38:06.985571 168 log.go:172] (0xc0000e8370) (0xc000689860) Stream removed, broadcasting: 1\nI0419 23:38:06.985613 168 log.go:172] (0xc0000e8370) Go away received\nI0419 23:38:06.985885 168 log.go:172] (0xc0000e8370) (0xc000689860) Stream removed, broadcasting: 1\nI0419 23:38:06.985901 168 log.go:172] (0xc0000e8370) (0xc000a1c000) Stream removed, broadcasting: 3\nI0419 23:38:06.985909 168 log.go:172] (0xc0000e8370) (0xc0009c4000) Stream removed, broadcasting: 5\n" Apr 19 23:38:06.990: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 19 23:38:06.990: INFO: stdout of mv 
-v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 19 23:38:06.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 19 23:38:07.192: INFO: stderr: "I0419 23:38:07.123402 190 log.go:172] (0xc000716a50) (0xc0006f9360) Create stream\nI0419 23:38:07.123461 190 log.go:172] (0xc000716a50) (0xc0006f9360) Stream added, broadcasting: 1\nI0419 23:38:07.126156 190 log.go:172] (0xc000716a50) Reply frame received for 1\nI0419 23:38:07.126195 190 log.go:172] (0xc000716a50) (0xc000a62000) Create stream\nI0419 23:38:07.126206 190 log.go:172] (0xc000716a50) (0xc000a62000) Stream added, broadcasting: 3\nI0419 23:38:07.127114 190 log.go:172] (0xc000716a50) Reply frame received for 3\nI0419 23:38:07.127134 190 log.go:172] (0xc000716a50) (0xc000a620a0) Create stream\nI0419 23:38:07.127142 190 log.go:172] (0xc000716a50) (0xc000a620a0) Stream added, broadcasting: 5\nI0419 23:38:07.128257 190 log.go:172] (0xc000716a50) Reply frame received for 5\nI0419 23:38:07.184590 190 log.go:172] (0xc000716a50) Data frame received for 3\nI0419 23:38:07.184613 190 log.go:172] (0xc000a62000) (3) Data frame handling\nI0419 23:38:07.184638 190 log.go:172] (0xc000a62000) (3) Data frame sent\nI0419 23:38:07.184645 190 log.go:172] (0xc000716a50) Data frame received for 3\nI0419 23:38:07.184649 190 log.go:172] (0xc000a62000) (3) Data frame handling\nI0419 23:38:07.184930 190 log.go:172] (0xc000716a50) Data frame received for 5\nI0419 23:38:07.184940 190 log.go:172] (0xc000a620a0) (5) Data frame handling\nI0419 23:38:07.184949 190 log.go:172] (0xc000a620a0) (5) Data frame sent\nI0419 23:38:07.184954 190 log.go:172] (0xc000716a50) Data frame received for 5\nI0419 23:38:07.184958 190 log.go:172] (0xc000a620a0) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0419 23:38:07.187211 190 log.go:172] (0xc000716a50) Data frame received for 1\nI0419 23:38:07.187325 190 log.go:172] (0xc0006f9360) (1) Data frame handling\nI0419 23:38:07.187377 190 log.go:172] (0xc0006f9360) (1) Data frame sent\nI0419 23:38:07.187391 190 log.go:172] (0xc000716a50) (0xc0006f9360) Stream removed, broadcasting: 1\nI0419 23:38:07.187405 190 log.go:172] (0xc000716a50) Go away received\nI0419 23:38:07.187782 190 log.go:172] (0xc000716a50) (0xc0006f9360) Stream removed, broadcasting: 1\nI0419 23:38:07.187799 190 log.go:172] (0xc000716a50) (0xc000a62000) Stream removed, broadcasting: 3\nI0419 23:38:07.187806 190 log.go:172] (0xc000716a50) (0xc000a620a0) Stream removed, broadcasting: 5\n" Apr 19 23:38:07.192: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 19 23:38:07.192: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 19 23:38:07.192: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 19 23:38:37.207: INFO: Deleting all statefulset in ns statefulset-4 Apr 19 23:38:37.210: INFO: Scaling statefulset ss to 0 Apr 19 23:38:37.218: INFO: Waiting for statefulset status.replicas updated to 0 Apr 19 23:38:37.220: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:38:37.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4" for this suite. 
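The predictable scale order verified above is the StatefulSet default (`podManagementPolicy: OrderedReady`): pods are created in order 0→N-1 and deleted in reverse, N-1→0, and progress halts while any stateful pod is unhealthy. A minimal sketch of a StatefulSet shaped like the test's `ss` — the image, service name, and probe are assumptions (the test serves `/index.html` from an Apache htdocs directory, so an `httpd` image is plausible but not confirmed by the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss                 # matches the "ss" name used by the test
spec:
  serviceName: test        # assumed headless service name
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: ordered scale-up, reverse-ordered scale-down
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4   # assumed; the test's htdocs path suggests Apache httpd
        readinessProbe:    # an unready pod halts further ordered scaling
          httpGet:
            path: /index.html
            port: 80
```

Moving `/tmp/index.html` into the htdocs directory, as the `kubectl exec` commands above do, is what flips each pod's readiness probe to passing so the scale operations can proceed.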
• [SLOW TEST:94.166 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":4,"skipped":90,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:38:37.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 19 23:38:37.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69" in namespace "downward-api-968" to be "Succeeded or Failed" Apr 19 23:38:37.346: INFO: Pod 
"downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69": Phase="Pending", Reason="", readiness=false. Elapsed: 18.571248ms Apr 19 23:38:39.350: INFO: Pod "downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022916302s Apr 19 23:38:41.354: INFO: Pod "downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027413625s STEP: Saw pod success Apr 19 23:38:41.355: INFO: Pod "downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69" satisfied condition "Succeeded or Failed" Apr 19 23:38:41.358: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69 container client-container: STEP: delete the pod Apr 19 23:38:41.428: INFO: Waiting for pod downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69 to disappear Apr 19 23:38:41.432: INFO: Pod downwardapi-volume-04d6e481-51f6-4536-a910-9fa6ecc3ec69 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:38:41.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-968" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:38:41.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 19 23:38:41.482: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 19 23:38:41.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2998' Apr 19 23:38:41.763: INFO: stderr: "" Apr 19 23:38:41.763: INFO: stdout: "service/agnhost-slave created\n" Apr 19 23:38:41.763: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 19 23:38:41.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2998' Apr 19 23:38:41.999: INFO: stderr: "" Apr 19 23:38:41.999: INFO: stdout: "service/agnhost-master created\n" Apr 19 23:38:41.999: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 19 23:38:41.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2998' Apr 19 23:38:42.270: INFO: stderr: "" Apr 19 23:38:42.270: INFO: stdout: "service/frontend created\n" Apr 19 23:38:42.270: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 19 23:38:42.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2998' Apr 19 23:38:42.535: INFO: stderr: "" Apr 19 23:38:42.535: INFO: stdout: "deployment.apps/frontend created\n" Apr 19 23:38:42.535: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 19 23:38:42.535: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2998' Apr 19 23:38:42.853: INFO: stderr: "" Apr 19 23:38:42.854: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 19 23:38:42.854: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 19 23:38:42.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2998' Apr 19 23:38:43.140: INFO: stderr: "" Apr 19 23:38:43.140: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 19 23:38:43.140: INFO: Waiting for all frontend pods to be Running. Apr 19 23:38:48.190: INFO: Waiting for frontend to serve content. Apr 19 23:38:49.230: INFO: Trying to add a new entry to the guestbook. Apr 19 23:38:49.240: INFO: Verifying that added entry can be retrieved. Apr 19 23:38:49.249: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Apr 19 23:38:54.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2998' Apr 19 23:38:54.393: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 19 23:38:54.393: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 19 23:38:54.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2998' Apr 19 23:38:54.573: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 19 23:38:54.573: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 19 23:38:54.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2998' Apr 19 23:38:54.694: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 19 23:38:54.694: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 19 23:38:54.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2998' Apr 19 23:38:54.806: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 19 23:38:54.806: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 19 23:38:54.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2998' Apr 19 23:38:55.014: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 19 23:38:55.014: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 19 23:38:55.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2998' Apr 19 23:38:55.330: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 19 23:38:55.330: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:38:55.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2998" for this suite. 
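The guestbook manifests are piped to `kubectl create -f -` and appear flattened onto single lines in the log above. For readability, here is the frontend Deployment from that output, reconstructed as YAML (content taken directly from the logged manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: ["guestbook", "--backend-port", "6379"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
```

The cleanup steps then delete each created resource with `--grace-period=0 --force`, which is why kubectl prints the "Immediate deletion does not wait for confirmation" warning for every object.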
• [SLOW TEST:13.952 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":6,"skipped":150,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:38:55.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-9c55ed61-4303-4644-ac0d-517560e4a227 in namespace container-probe-7793 Apr 19 23:39:01.743: INFO: Started pod busybox-9c55ed61-4303-4644-ac0d-517560e4a227 in namespace container-probe-7793 STEP: checking the pod's current state and verifying that restartCount is present Apr 19 23:39:01.747: INFO: Initial restart count of pod 
busybox-9c55ed61-4303-4644-ac0d-517560e4a227 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:02.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7793" for this suite. • [SLOW TEST:247.223 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":162,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:02.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-00276a57-ba92-4609-889c-cf9668b3f221 STEP: Creating secret with name 
secret-projected-all-test-volume-bc55846e-9528-4730-99bd-309582e450c9 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 19 23:43:02.698: INFO: Waiting up to 5m0s for pod "projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb" in namespace "projected-79" to be "Succeeded or Failed" Apr 19 23:43:02.714: INFO: Pod "projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.239598ms Apr 19 23:43:04.717: INFO: Pod "projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018876135s Apr 19 23:43:06.722: INFO: Pod "projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023493279s STEP: Saw pod success Apr 19 23:43:06.722: INFO: Pod "projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb" satisfied condition "Succeeded or Failed" Apr 19 23:43:06.726: INFO: Trying to get logs from node latest-worker2 pod projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb container projected-all-volume-test: STEP: delete the pod Apr 19 23:43:06.787: INFO: Waiting for pod projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb to disappear Apr 19 23:43:06.798: INFO: Pod projected-volume-336c1825-7b84-4d8b-a938-808454fa8dfb no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:06.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-79" for this suite. 
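The "all projections" test mounts a single `projected` volume that combines a ConfigMap, a Secret, and the Downward API under one mount path. A minimal sketch of that shape (the resource names, keys, and file paths are hypothetical, not the generated names in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:                       # each source contributes files to the same volume
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: my-configmap         # hypothetical ConfigMap name
          items:
          - key: data
            path: cm-data
      - secret:
          name: my-secret            # hypothetical Secret name
          items:
          - key: data
            path: secret-data
```

As in the other volume tests, the pod reads the projected files and exits, and the suite waits for the "Succeeded or Failed" condition before collecting its logs.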
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":8,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:06.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4572/configmap-test-3f76bcdd-ed4b-4667-b6e2-1a03b7584549 STEP: Creating a pod to test consume configMaps Apr 19 23:43:06.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5" in namespace "configmap-4572" to be "Succeeded or Failed" Apr 19 23:43:06.864: INFO: Pod "pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902158ms Apr 19 23:43:08.879: INFO: Pod "pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019070338s Apr 19 23:43:10.883: INFO: Pod "pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023333425s STEP: Saw pod success Apr 19 23:43:10.883: INFO: Pod "pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5" satisfied condition "Succeeded or Failed" Apr 19 23:43:10.886: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5 container env-test: STEP: delete the pod Apr 19 23:43:10.931: INFO: Waiting for pod pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5 to disappear Apr 19 23:43:10.942: INFO: Pod pod-configmaps-f3e73efd-3c5e-4090-adda-4413a0730fa5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:10.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4572" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":187,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:10.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 19 23:43:11.019: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015" in namespace "security-context-test-2391" to be "Succeeded or Failed" Apr 19 23:43:11.022: INFO: Pod "busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.74961ms Apr 19 23:43:13.026: INFO: Pod "busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006840086s Apr 19 23:43:15.031: INFO: Pod "busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011415691s Apr 19 23:43:15.031: INFO: Pod "busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015" satisfied condition "Succeeded or Failed" Apr 19 23:43:15.036: INFO: Got logs for pod "busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:15.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2391" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":202,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:15.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:15.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1531" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":11,"skipped":202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:15.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 19 23:43:15.270: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 19 23:43:15.286: INFO: Waiting for terminating namespaces to be deleted... 
Apr 19 23:43:15.288: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 19 23:43:15.293: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 19 23:43:15.293: INFO: Container kindnet-cni ready: true, restart count 0 Apr 19 23:43:15.293: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 19 23:43:15.293: INFO: Container kube-proxy ready: true, restart count 0 Apr 19 23:43:15.293: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 19 23:43:15.298: INFO: busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015 from security-context-test-2391 started at 2020-04-19 23:43:11 +0000 UTC (1 container statuses recorded) Apr 19 23:43:15.298: INFO: Container busybox-privileged-false-7b8f41c2-1898-446b-a8c5-eadc6a1ab015 ready: false, restart count 0 Apr 19 23:43:15.298: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 19 23:43:15.298: INFO: Container kindnet-cni ready: true, restart count 0 Apr 19 23:43:15.298: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 19 23:43:15.298: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 19 23:43:15.389: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 19 23:43:15.389: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 19 23:43:15.389: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 19 23:43:15.389: INFO: Pod kube-proxy-s9v6p requesting 
resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 19 23:43:15.389: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 Apr 19 23:43:15.396: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-7edd7aa3-8314-4346-8947-e866dd4ee3d1.16075d2242cfce58], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6383/filler-pod-7edd7aa3-8314-4346-8947-e866dd4ee3d1 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7edd7aa3-8314-4346-8947-e866dd4ee3d1.16075d228ca8f8b0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7edd7aa3-8314-4346-8947-e866dd4ee3d1.16075d22c772f222], Reason = [Created], Message = [Created container filler-pod-7edd7aa3-8314-4346-8947-e866dd4ee3d1] STEP: Considering event: Type = [Normal], Name = [filler-pod-7edd7aa3-8314-4346-8947-e866dd4ee3d1.16075d22e1a7c1fe], Reason = [Started], Message = [Started container filler-pod-7edd7aa3-8314-4346-8947-e866dd4ee3d1] STEP: Considering event: Type = [Normal], Name = [filler-pod-cfcd31f9-bd09-498f-9c96-ec3091be5bd9.16075d224509ef7d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6383/filler-pod-cfcd31f9-bd09-498f-9c96-ec3091be5bd9 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-cfcd31f9-bd09-498f-9c96-ec3091be5bd9.16075d22bf03cd04], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cfcd31f9-bd09-498f-9c96-ec3091be5bd9.16075d22f0adaa2b], Reason = [Created], Message = [Created container filler-pod-cfcd31f9-bd09-498f-9c96-ec3091be5bd9] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-cfcd31f9-bd09-498f-9c96-ec3091be5bd9.16075d2300152bc2], Reason = [Started], Message = [Started container filler-pod-cfcd31f9-bd09-498f-9c96-ec3091be5bd9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16075d23346a62a3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:20.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6383" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.378 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":12,"skipped":235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:20.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-6b5b3043-e675-4e0f-a2a9-59490baf9b17 STEP: Creating a pod to test consume secrets Apr 19 23:43:20.638: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086" in namespace "projected-2948" to be "Succeeded or Failed" Apr 19 23:43:20.651: INFO: Pod "pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086": Phase="Pending", Reason="", readiness=false. Elapsed: 13.013495ms Apr 19 23:43:22.656: INFO: Pod "pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017997727s Apr 19 23:43:24.660: INFO: Pod "pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022250799s STEP: Saw pod success Apr 19 23:43:24.660: INFO: Pod "pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086" satisfied condition "Succeeded or Failed" Apr 19 23:43:24.664: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086 container secret-volume-test: STEP: delete the pod Apr 19 23:43:24.687: INFO: Waiting for pod pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086 to disappear Apr 19 23:43:24.703: INFO: Pod pod-projected-secrets-4d5e89ef-1b12-4045-80ee-7caa1c700086 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:24.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2948" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":282,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:24.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service 
externalname-service with the type=ExternalName in namespace services-5067 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5067 I0419 23:43:24.918037 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5067, replica count: 2 I0419 23:43:27.968493 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0419 23:43:30.968739 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 19 23:43:30.968: INFO: Creating new exec pod Apr 19 23:43:35.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5067 execpod8mc8n -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 19 23:43:36.229: INFO: stderr: "I0419 23:43:36.121394 451 log.go:172] (0xc0000e8420) (0xc0004e2be0) Create stream\nI0419 23:43:36.121444 451 log.go:172] (0xc0000e8420) (0xc0004e2be0) Stream added, broadcasting: 1\nI0419 23:43:36.125677 451 log.go:172] (0xc0000e8420) Reply frame received for 1\nI0419 23:43:36.125739 451 log.go:172] (0xc0000e8420) (0xc000bb2000) Create stream\nI0419 23:43:36.125769 451 log.go:172] (0xc0000e8420) (0xc000bb2000) Stream added, broadcasting: 3\nI0419 23:43:36.127018 451 log.go:172] (0xc0000e8420) Reply frame received for 3\nI0419 23:43:36.127067 451 log.go:172] (0xc0000e8420) (0xc0008d0000) Create stream\nI0419 23:43:36.127083 451 log.go:172] (0xc0000e8420) (0xc0008d0000) Stream added, broadcasting: 5\nI0419 23:43:36.128039 451 log.go:172] (0xc0000e8420) Reply frame received for 5\nI0419 23:43:36.221963 451 log.go:172] (0xc0000e8420) Data frame received for 5\nI0419 23:43:36.222013 451 log.go:172] (0xc0008d0000) (5) Data frame handling\nI0419 
23:43:36.222044 451 log.go:172] (0xc0000e8420) Data frame received for 3\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0419 23:43:36.222071 451 log.go:172] (0xc000bb2000) (3) Data frame handling\nI0419 23:43:36.222101 451 log.go:172] (0xc0008d0000) (5) Data frame sent\nI0419 23:43:36.222125 451 log.go:172] (0xc0000e8420) Data frame received for 5\nI0419 23:43:36.222140 451 log.go:172] (0xc0008d0000) (5) Data frame handling\nI0419 23:43:36.224046 451 log.go:172] (0xc0000e8420) Data frame received for 1\nI0419 23:43:36.224066 451 log.go:172] (0xc0004e2be0) (1) Data frame handling\nI0419 23:43:36.224076 451 log.go:172] (0xc0004e2be0) (1) Data frame sent\nI0419 23:43:36.224090 451 log.go:172] (0xc0000e8420) (0xc0004e2be0) Stream removed, broadcasting: 1\nI0419 23:43:36.224109 451 log.go:172] (0xc0000e8420) Go away received\nI0419 23:43:36.224463 451 log.go:172] (0xc0000e8420) (0xc0004e2be0) Stream removed, broadcasting: 1\nI0419 23:43:36.224484 451 log.go:172] (0xc0000e8420) (0xc000bb2000) Stream removed, broadcasting: 3\nI0419 23:43:36.224500 451 log.go:172] (0xc0000e8420) (0xc0008d0000) Stream removed, broadcasting: 5\n" Apr 19 23:43:36.229: INFO: stdout: "" Apr 19 23:43:36.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5067 execpod8mc8n -- /bin/sh -x -c nc -zv -t -w 2 10.96.71.227 80' Apr 19 23:43:36.439: INFO: stderr: "I0419 23:43:36.368533 471 log.go:172] (0xc0009000b0) (0xc00064d540) Create stream\nI0419 23:43:36.368584 471 log.go:172] (0xc0009000b0) (0xc00064d540) Stream added, broadcasting: 1\nI0419 23:43:36.371427 471 log.go:172] (0xc0009000b0) Reply frame received for 1\nI0419 23:43:36.371456 471 log.go:172] (0xc0009000b0) (0xc0008ce000) Create stream\nI0419 23:43:36.371464 471 log.go:172] (0xc0009000b0) (0xc0008ce000) Stream added, broadcasting: 3\nI0419 23:43:36.372519 471 log.go:172] (0xc0009000b0) Reply 
frame received for 3\nI0419 23:43:36.372562 471 log.go:172] (0xc0009000b0) (0xc0003c6a00) Create stream\nI0419 23:43:36.372576 471 log.go:172] (0xc0009000b0) (0xc0003c6a00) Stream added, broadcasting: 5\nI0419 23:43:36.373406 471 log.go:172] (0xc0009000b0) Reply frame received for 5\nI0419 23:43:36.432512 471 log.go:172] (0xc0009000b0) Data frame received for 3\nI0419 23:43:36.432540 471 log.go:172] (0xc0008ce000) (3) Data frame handling\nI0419 23:43:36.432556 471 log.go:172] (0xc0009000b0) Data frame received for 5\nI0419 23:43:36.432561 471 log.go:172] (0xc0003c6a00) (5) Data frame handling\nI0419 23:43:36.432569 471 log.go:172] (0xc0003c6a00) (5) Data frame sent\nI0419 23:43:36.432574 471 log.go:172] (0xc0009000b0) Data frame received for 5\nI0419 23:43:36.432579 471 log.go:172] (0xc0003c6a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.71.227 80\nConnection to 10.96.71.227 80 port [tcp/http] succeeded!\nI0419 23:43:36.434024 471 log.go:172] (0xc0009000b0) Data frame received for 1\nI0419 23:43:36.434039 471 log.go:172] (0xc00064d540) (1) Data frame handling\nI0419 23:43:36.434065 471 log.go:172] (0xc00064d540) (1) Data frame sent\nI0419 23:43:36.434083 471 log.go:172] (0xc0009000b0) (0xc00064d540) Stream removed, broadcasting: 1\nI0419 23:43:36.434166 471 log.go:172] (0xc0009000b0) Go away received\nI0419 23:43:36.434363 471 log.go:172] (0xc0009000b0) (0xc00064d540) Stream removed, broadcasting: 1\nI0419 23:43:36.434374 471 log.go:172] (0xc0009000b0) (0xc0008ce000) Stream removed, broadcasting: 3\nI0419 23:43:36.434380 471 log.go:172] (0xc0009000b0) (0xc0003c6a00) Stream removed, broadcasting: 5\n" Apr 19 23:43:36.439: INFO: stdout: "" Apr 19 23:43:36.439: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:43:36.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-5067" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.751 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":14,"skipped":289,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:43:36.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-65b80e6b-9b28-48d6-8e7e-3cccfe376c6f in namespace container-probe-1921 Apr 19 23:43:40.587: INFO: Started pod test-webserver-65b80e6b-9b28-48d6-8e7e-3cccfe376c6f in namespace container-probe-1921 STEP: checking the pod's current state and verifying that restartCount is present Apr 19 23:43:40.590: INFO: 
Initial restart count of pod test-webserver-65b80e6b-9b28-48d6-8e7e-3cccfe376c6f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:47:41.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1921" for this suite. • [SLOW TEST:244.955 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":294,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:47:41.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 19 23:47:46.098: INFO: 
Successfully updated pod "pod-update-activedeadlineseconds-429e4b01-af88-42b2-bc18-408c1bdd27c8" Apr 19 23:47:46.098: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-429e4b01-af88-42b2-bc18-408c1bdd27c8" in namespace "pods-165" to be "terminated due to deadline exceeded" Apr 19 23:47:46.124: INFO: Pod "pod-update-activedeadlineseconds-429e4b01-af88-42b2-bc18-408c1bdd27c8": Phase="Running", Reason="", readiness=true. Elapsed: 26.006597ms Apr 19 23:47:48.128: INFO: Pod "pod-update-activedeadlineseconds-429e4b01-af88-42b2-bc18-408c1bdd27c8": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02995035s Apr 19 23:47:48.128: INFO: Pod "pod-update-activedeadlineseconds-429e4b01-af88-42b2-bc18-408c1bdd27c8" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:47:48.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-165" for this suite. 
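The activeDeadlineSeconds test above patches a running pod and then polls until it is "terminated due to deadline exceeded". A minimal sketch of the condition the framework is waiting for (the dict shapes mirror the phases and reason recorded in the log; this is an illustration, not the framework's actual Go code):

```python
# Sketch of the state transition the test waits for: a pod patched with a
# short activeDeadlineSeconds moves from Running to Failed with reason
# DeadlineExceeded, as the log above records.

def terminated_due_to_deadline(pod: dict) -> bool:
    """Condition polled by the e2e framework: phase Failed + DeadlineExceeded."""
    status = pod.get("status", {})
    return status.get("phase") == "Failed" and status.get("reason") == "DeadlineExceeded"

# Pod as first observed (Running), then as observed ~2s later (per the log).
running = {"status": {"phase": "Running", "reason": ""}}
failed = {"status": {"phase": "Failed", "reason": "DeadlineExceeded"}}

print(terminated_due_to_deadline(running))  # False
print(terminated_due_to_deadline(failed))   # True
```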
• [SLOW TEST:6.700 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":302,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:47:48.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-09bc092e-291a-4b71-bc8c-9ab8074d93ce STEP: Creating a pod to test consume configMaps Apr 19 23:47:48.241: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0" in namespace "projected-5067" to be "Succeeded or Failed" Apr 19 23:47:48.264: INFO: Pod "pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.879133ms Apr 19 23:47:50.296: INFO: Pod "pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054832747s Apr 19 23:47:52.301: INFO: Pod "pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059276435s STEP: Saw pod success Apr 19 23:47:52.301: INFO: Pod "pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0" satisfied condition "Succeeded or Failed" Apr 19 23:47:52.304: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0 container projected-configmap-volume-test: STEP: delete the pod Apr 19 23:47:52.349: INFO: Waiting for pod pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0 to disappear Apr 19 23:47:52.361: INFO: Pod pod-projected-configmaps-be62ce91-6210-4cf4-bbf7-d88ca7e553f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:47:52.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5067" for this suite. 
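The projected configMap test above ("mappings and Item mode set") creates a pod whose volume remaps a configMap key to a new path with an explicit file mode. A hedged sketch of that pod-spec shape follows; the configMap and container names come from the log, but the key, path, image, and 0o400 mode are assumptions for illustration only:

```python
# Hypothetical pod spec for a projected configMap volume with an item mapping
# and an explicit mode. Only the configMap/container names are taken from the
# log; every other value is an assumed placeholder.
pod_spec = {
    "containers": [{
        "name": "projected-configmap-volume-test",   # container name from the log
        "image": "k8s.gcr.io/pause:3.2",             # placeholder image (assumption)
        "volumeMounts": [{"name": "projected-configmap-volume",
                          "mountPath": "/etc/projected-configmap-volume"}],
    }],
    "volumes": [{
        "name": "projected-configmap-volume",
        "projected": {"sources": [{
            "configMap": {
                "name": "projected-configmap-test-volume-map-09bc092e-291a-4b71-bc8c-9ab8074d93ce",
                "items": [{"key": "data-1",            # assumed key
                           "path": "path/to/data-2",   # assumed remapped path
                           "mode": 0o400}],            # "Item mode set" (assumed value)
            }
        }]},
    }],
}
item = pod_spec["volumes"][0]["projected"]["sources"][0]["configMap"]["items"][0]
print(item["path"], oct(item["mode"]))
```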
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":308,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:47:52.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 19 23:47:52.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d" in namespace "projected-3515" to be "Succeeded or Failed" Apr 19 23:47:52.463: INFO: Pod "downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462817ms Apr 19 23:47:54.469: INFO: Pod "downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010039924s Apr 19 23:47:56.488: INFO: Pod "downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028690501s STEP: Saw pod success Apr 19 23:47:56.488: INFO: Pod "downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d" satisfied condition "Succeeded or Failed" Apr 19 23:47:56.492: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d container client-container: STEP: delete the pod Apr 19 23:47:56.519: INFO: Waiting for pod downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d to disappear Apr 19 23:47:56.530: INFO: Pod downwardapi-volume-607a4209-dd64-4c3d-afd0-db0e14d75a8d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:47:56.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3515" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:47:56.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 19 23:47:56.614: INFO: Waiting up to 5m0s for pod 
"pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9" in namespace "emptydir-8106" to be "Succeeded or Failed" Apr 19 23:47:56.636: INFO: Pod "pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.197309ms Apr 19 23:47:58.639: INFO: Pod "pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024659924s Apr 19 23:48:00.643: INFO: Pod "pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9": Phase="Running", Reason="", readiness=true. Elapsed: 4.028624021s Apr 19 23:48:02.648: INFO: Pod "pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033141298s STEP: Saw pod success Apr 19 23:48:02.648: INFO: Pod "pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9" satisfied condition "Succeeded or Failed" Apr 19 23:48:02.651: INFO: Trying to get logs from node latest-worker pod pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9 container test-container: STEP: delete the pod Apr 19 23:48:02.685: INFO: Waiting for pod pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9 to disappear Apr 19 23:48:02.687: INFO: Pod pod-4bb2f33f-fc3b-4e40-9500-6cfef0da35b9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:02.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8106" for this suite. 
• [SLOW TEST:6.156 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:48:02.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-54158b36-954a-49ed-9380-1a45041fc985 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-54158b36-954a-49ed-9380-1a45041fc985 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:08.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-232" for this suite. 
• [SLOW TEST:6.161 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:48:08.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 19 23:48:08.933: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208" in namespace "downward-api-4978" to be "Succeeded or Failed" Apr 19 23:48:08.936: INFO: Pod "downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.686698ms Apr 19 23:48:10.940: INFO: Pod "downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00760736s Apr 19 23:48:12.944: INFO: Pod "downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011547174s STEP: Saw pod success Apr 19 23:48:12.944: INFO: Pod "downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208" satisfied condition "Succeeded or Failed" Apr 19 23:48:12.948: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208 container client-container: STEP: delete the pod Apr 19 23:48:12.968: INFO: Waiting for pod downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208 to disappear Apr 19 23:48:12.972: INFO: Pod downwardapi-volume-5fb99991-3a52-45ea-9598-1e3e86840208 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:12.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4978" for this suite. 
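The Downward API test above ("should set mode on item file") mounts pod metadata as a file with an explicit mode. A hedged sketch of that volume shape follows; the item path, field path, and 0o400 mode are assumptions for illustration, not the test's exact values:

```python
# Hypothetical downward API volume item with an explicit per-item mode, the
# shape exercised by the "set mode on item file" test. All concrete values
# here are assumed placeholders.
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "podname",
            "fieldRef": {"fieldPath": "metadata.name"},  # expose the pod's own name
            "mode": 0o400,                               # owner read-only (assumed)
        }],
    },
}
item = volume["downwardAPI"]["items"][0]
print(item["path"], oct(item["mode"]))  # podname 0o400
```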
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:48:12.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 19 23:48:13.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6811' Apr 19 23:48:16.383: INFO: stderr: "" Apr 19 23:48:16.383: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 19 23:48:16.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6811' Apr 19 23:48:16.626: INFO: stderr: "" Apr 19 23:48:16.626: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 19 23:48:17.630: INFO: Selector matched 1 pods for map[app:agnhost] Apr 19 23:48:17.630: INFO: Found 0 / 1 Apr 19 23:48:18.645: INFO: Selector matched 1 pods for map[app:agnhost] Apr 19 23:48:18.645: INFO: Found 0 / 1 Apr 19 23:48:19.631: INFO: Selector matched 1 pods for map[app:agnhost] Apr 19 23:48:19.631: INFO: Found 0 / 1 Apr 19 23:48:20.630: INFO: Selector matched 1 pods for map[app:agnhost] Apr 19 23:48:20.630: INFO: Found 1 / 1 Apr 19 23:48:20.630: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 19 23:48:20.633: INFO: Selector matched 1 pods for map[app:agnhost] Apr 19 23:48:20.633: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 19 23:48:20.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-8t4cb --namespace=kubectl-6811' Apr 19 23:48:20.743: INFO: stderr: "" Apr 19 23:48:20.743: INFO: stdout: "Name: agnhost-master-8t4cb\nNamespace: kubectl-6811\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Sun, 19 Apr 2020 23:48:16 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.41\nIPs:\n IP: 10.244.1.41\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://7686415cd58cd0d919f7b2a1dbce01c6797a3a8afc099cacb1dad1ead5c03a2c\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 19 Apr 2020 23:48:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dmv6s (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dmv6s:\n Type: Secret (a volume populated 
by a Secret)\n SecretName: default-token-dmv6s\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-6811/agnhost-master-8t4cb to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-master\n" Apr 19 23:48:20.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6811' Apr 19 23:48:20.863: INFO: stderr: "" Apr 19 23:48:20.864: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6811\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-8t4cb\n" Apr 19 23:48:20.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6811' Apr 19 23:48:20.978: INFO: stderr: "" Apr 19 23:48:20.978: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6811\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 
10.96.177.41\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.41:6379\nSession Affinity: None\nEvents: \n" Apr 19 23:48:20.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 19 23:48:21.106: INFO: stderr: "" Apr 19 23:48:21.106: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sun, 19 Apr 2020 23:48:16 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 19 Apr 2020 23:45:24 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 19 Apr 2020 23:45:24 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 19 Apr 2020 23:45:24 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 19 Apr 2020 23:45:24 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n 
hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 35d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 35d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 35d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 35d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 19 23:48:21.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-6811' Apr 19 23:48:21.203: INFO: stderr: "" Apr 19 23:48:21.203: INFO: stdout: "Name: kubectl-6811\nLabels: e2e-framework=kubectl\n 
e2e-run=b81b16fd-535e-4780-bae5-f734f87c6a06\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:21.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6811" for this suite. • [SLOW TEST:8.231 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":22,"skipped":462,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:48:21.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 19 23:48:21.242: INFO: >>> kubeConfig: /root/.kube/config STEP: 
client-side validation (kubectl create and apply) allows request with known and required properties Apr 19 23:48:24.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 create -f -' Apr 19 23:48:28.421: INFO: stderr: "" Apr 19 23:48:28.421: INFO: stdout: "e2e-test-crd-publish-openapi-5892-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 19 23:48:28.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 delete e2e-test-crd-publish-openapi-5892-crds test-foo' Apr 19 23:48:28.523: INFO: stderr: "" Apr 19 23:48:28.523: INFO: stdout: "e2e-test-crd-publish-openapi-5892-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 19 23:48:28.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 apply -f -' Apr 19 23:48:28.809: INFO: stderr: "" Apr 19 23:48:28.809: INFO: stdout: "e2e-test-crd-publish-openapi-5892-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 19 23:48:28.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 delete e2e-test-crd-publish-openapi-5892-crds test-foo' Apr 19 23:48:28.919: INFO: stderr: "" Apr 19 23:48:28.919: INFO: stdout: "e2e-test-crd-publish-openapi-5892-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 19 23:48:28.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 create -f -' Apr 19 23:48:29.161: INFO: rc: 1 Apr 19 23:48:29.162: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 apply -f -' Apr 19 23:48:29.415: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 19 23:48:29.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 create -f -' Apr 19 23:48:29.630: INFO: rc: 1 Apr 19 23:48:29.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1408 apply -f -' Apr 19 23:48:29.854: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 19 23:48:29.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5892-crds' Apr 19 23:48:30.093: INFO: stderr: "" Apr 19 23:48:30.093: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5892-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 19 23:48:30.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5892-crds.metadata' Apr 19 23:48:30.321: INFO: stderr: "" Apr 19 23:48:30.321: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5892-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 19 23:48:30.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5892-crds.spec' Apr 19 23:48:30.540: INFO: stderr: "" Apr 19 23:48:30.540: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5892-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 19 23:48:30.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5892-crds.spec.bars' Apr 19 23:48:30.786: INFO: stderr: "" Apr 19 23:48:30.786: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5892-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 19 23:48:30.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5892-crds.spec.bars2' Apr 19 23:48:31.034: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:33.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1408" for this suite. • [SLOW TEST:12.728 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":23,"skipped":465,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:48:33.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 19 23:48:34.414: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 19 23:48:36.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936914, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936914, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936914, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936914, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 19 23:48:39.440: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis 
discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:39.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9248" for this suite. STEP: Destroying namespace "webhook-9248-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.629 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":24,"skipped":470,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating 
a kubernetes client Apr 19 23:48:39.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-ad0d3012-35a0-475d-b8e2-1ce44b36f0d7 STEP: Creating configMap with name cm-test-opt-upd-e3c4df69-99d1-4694-bc8e-825a77dc965f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ad0d3012-35a0-475d-b8e2-1ce44b36f0d7 STEP: Updating configmap cm-test-opt-upd-e3c4df69-99d1-4694-bc8e-825a77dc965f STEP: Creating configMap with name cm-test-opt-create-fda9ab2a-be71-4f8c-8002-b1394bba2984 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:47.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4484" for this suite. 
• [SLOW TEST:8.191 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":473,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:48:47.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Apr 19 23:48:48.390: INFO: created pod pod-service-account-defaultsa Apr 19 23:48:48.390: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 19 23:48:48.413: INFO: created pod pod-service-account-mountsa Apr 19 23:48:48.413: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 19 23:48:48.435: INFO: created pod pod-service-account-nomountsa Apr 19 23:48:48.435: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 19 23:48:48.507: INFO: created pod pod-service-account-defaultsa-mountspec Apr 19 23:48:48.507: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 19 23:48:48.519: INFO: created pod pod-service-account-mountsa-mountspec Apr 19 23:48:48.519: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 19 23:48:48.587: INFO: created pod pod-service-account-nomountsa-mountspec Apr 19 23:48:48.587: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 19 23:48:48.693: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 19 23:48:48.693: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 19 23:48:48.732: INFO: created pod pod-service-account-mountsa-nomountspec Apr 19 23:48:48.732: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 19 23:48:48.752: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 19 23:48:48.752: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:48:48.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3782" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":26,"skipped":479,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:48:48.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Apr 19 23:48:49.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-4393 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 19 23:48:49.182: INFO: stderr: "" Apr 19 23:48:49.182: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 19 23:48:49.182: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 19 23:48:49.182: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4393" to be "running and ready, or succeeded" Apr 19 23:48:49.189: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.227436ms Apr 19 23:48:51.238: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055822368s Apr 19 23:48:53.558: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376064441s Apr 19 23:48:55.886: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.703771134s Apr 19 23:48:57.934: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.752114846s Apr 19 23:48:59.970: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 10.788278367s Apr 19 23:48:59.970: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 19 23:48:59.970: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Apr 19 23:48:59.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4393' Apr 19 23:49:00.089: INFO: stderr: "" Apr 19 23:49:00.089: INFO: stdout: "I0419 23:48:58.628361 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/rhp 571\nI0419 23:48:58.828552 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/lvb7 401\nI0419 23:48:59.028503 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/fw9g 304\nI0419 23:48:59.228562 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/wn67 488\nI0419 23:48:59.428584 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/w4w 597\nI0419 23:48:59.628523 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/cv6 599\nI0419 23:48:59.828534 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/8h5 580\nI0419 23:49:00.028534 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/wfj 527\n" STEP: limiting log lines Apr 19 23:49:00.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4393 --tail=1' Apr 19 23:49:00.350: INFO: stderr: "" Apr 19 23:49:00.350: INFO: stdout: "I0419 23:49:00.228516 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/h8m 402\n" Apr 19 23:49:00.350: INFO: got output "I0419 23:49:00.228516 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/h8m 402\n" STEP: limiting log bytes Apr 19 23:49:00.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4393 --limit-bytes=1' Apr 19 23:49:00.476: INFO: stderr: "" Apr 19 23:49:00.476: INFO: stdout: "I" Apr 19 23:49:00.476: INFO: got output "I" STEP: exposing timestamps Apr 19 23:49:00.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4393 --tail=1 --timestamps' Apr 19 23:49:00.640: INFO: stderr: "" Apr 19 23:49:00.640: INFO: stdout: "2020-04-19T23:49:00.628683025Z I0419 23:49:00.628519 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/mtx 529\n" Apr 19 23:49:00.640: INFO: got output "2020-04-19T23:49:00.628683025Z I0419 23:49:00.628519 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/mtx 529\n" STEP: restricting to a time range Apr 19 23:49:03.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4393 --since=1s' Apr 19 23:49:03.256: INFO: stderr: "" Apr 19 23:49:03.256: INFO: stdout: "I0419 23:49:02.428523 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/hl5 230\nI0419 23:49:02.628581 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/zj5c 248\nI0419 23:49:02.828543 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/2j7 244\nI0419 23:49:03.028547 1 logs_generator.go:76] 22 GET 
/api/v1/namespaces/default/pods/66jt 585\nI0419 23:49:03.228521 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/7n68 343\n" Apr 19 23:49:03.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4393 --since=24h' Apr 19 23:49:03.375: INFO: stderr: "" Apr 19 23:49:03.375: INFO: stdout: "I0419 23:48:58.628361 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/rhp 571\nI0419 23:48:58.828552 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/lvb7 401\nI0419 23:48:59.028503 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/fw9g 304\nI0419 23:48:59.228562 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/wn67 488\nI0419 23:48:59.428584 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/w4w 597\nI0419 23:48:59.628523 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/cv6 599\nI0419 23:48:59.828534 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/8h5 580\nI0419 23:49:00.028534 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/wfj 527\nI0419 23:49:00.228516 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/h8m 402\nI0419 23:49:00.428567 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/rfgn 231\nI0419 23:49:00.628519 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/mtx 529\nI0419 23:49:00.828551 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/4km 347\nI0419 23:49:01.028525 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/xhc 490\nI0419 23:49:01.228575 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/jw52 347\nI0419 23:49:01.428523 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/pdhq 288\nI0419 23:49:01.628534 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/6j7w 436\nI0419 23:49:01.828555 1 logs_generator.go:76] 16 PUT 
/api/v1/namespaces/default/pods/9t8t 571\nI0419 23:49:02.028545 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/cm2j 595\nI0419 23:49:02.228565 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/xzpj 353\nI0419 23:49:02.428523 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/hl5 230\nI0419 23:49:02.628581 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/zj5c 248\nI0419 23:49:02.828543 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/2j7 244\nI0419 23:49:03.028547 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/66jt 585\nI0419 23:49:03.228521 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/7n68 343\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 19 23:49:03.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4393' Apr 19 23:49:12.822: INFO: stderr: "" Apr 19 23:49:12.822: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:49:12.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4393" for this suite. 
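The flags exercised in this test (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) are simple truncations and filters applied to the container's log stream. The line- and byte-limiting semantics can be sketched locally; this illustrates the observable behavior only, not the kubelet's actual implementation:

```python
def tail_lines(log: str, n: int) -> str:
    """Behaves like `kubectl logs --tail=n`: keep only the last n lines."""
    lines = log.splitlines(keepends=True)
    return "".join(lines[-n:])

def limit_bytes(log: str, n: int) -> str:
    """Behaves like `kubectl logs --limit-bytes=n`: keep only the first n bytes."""
    return log.encode()[:n].decode(errors="ignore")

log = "line one\nline two\nline three\n"
assert tail_lines(log, 1) == "line three\n"
# Truncating to one byte keeps only the first character, which for glog-style
# INFO lines is "I" -- matching the stdout "I" captured in the run above.
assert limit_bytes("I0419 23:49:00.228516 ...", 1) == "I"
```

`--timestamps` prepends the RFC3339 capture time to each line, and `--since` drops lines whose capture time is older than the given duration, which is why the `--since=1s` call above returns only entries from the last second.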
• [SLOW TEST:23.947 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":27,"skipped":490,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:49:12.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1484.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1484.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1484.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 19 23:49:18.970: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.973: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.976: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.979: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.987: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.989: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from 
pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.992: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.994: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:18.999: INFO: Lookups using dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local] Apr 19 23:49:24.005: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.008: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.011: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local from 
pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.014: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.023: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.026: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.029: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.031: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:24.037: INFO: Lookups using dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local] Apr 19 23:49:29.004: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.008: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.011: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.014: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.024: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.027: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.030: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod 
dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.034: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:29.039: INFO: Lookups using dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local] Apr 19 23:49:34.005: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.009: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.012: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.015: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod 
dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.025: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.028: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.031: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.034: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:34.040: INFO: Lookups using dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local] Apr 19 23:49:39.005: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.009: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.012: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.015: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.025: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.028: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.030: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.033: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:39.039: INFO: Lookups using dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local] Apr 19 23:49:44.003: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.005: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.007: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.009: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.015: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.016: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.018: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.020: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local from pod dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1: the server could not find the requested resource (get pods dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1) Apr 19 23:49:44.025: INFO: Lookups using dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1484.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1484.svc.cluster.local jessie_udp@dns-test-service-2.dns-1484.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1484.svc.cluster.local] Apr 19 23:49:49.035: INFO: DNS probes using dns-1484/dns-test-54731f27-ce9a-4401-9b8e-e5012d046cd1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 
23:49:49.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1484" for this suite. • [SLOW TEST:37.077 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":28,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:49:49.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 19 23:49:50.072: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix978426711/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:49:50.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-6044" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":29,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:49:50.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 19 23:49:50.259: INFO: Waiting up to 5m0s for pod "pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43" in namespace "emptydir-9785" to be "Succeeded or Failed" Apr 19 23:49:50.287: INFO: Pod "pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43": Phase="Pending", Reason="", readiness=false. Elapsed: 28.21813ms Apr 19 23:49:52.377: INFO: Pod "pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118180462s Apr 19 23:49:54.381: INFO: Pod "pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.122167096s STEP: Saw pod success Apr 19 23:49:54.381: INFO: Pod "pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43" satisfied condition "Succeeded or Failed" Apr 19 23:49:54.384: INFO: Trying to get logs from node latest-worker2 pod pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43 container test-container: STEP: delete the pod Apr 19 23:49:54.448: INFO: Waiting for pod pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43 to disappear Apr 19 23:49:54.460: INFO: Pod pod-c9fd47dd-be63-4b41-bcf6-b5042e612e43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:49:54.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9785" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":606,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:49:54.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 19 23:49:54.516: INFO: Waiting up to 5m0s for pod "pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42" in namespace "emptydir-1985" to be "Succeeded or Failed" Apr 19 
23:49:54.520: INFO: Pod "pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42": Phase="Pending", Reason="", readiness=false. Elapsed: 3.773011ms Apr 19 23:49:56.524: INFO: Pod "pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007516967s Apr 19 23:49:58.527: INFO: Pod "pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010696601s STEP: Saw pod success Apr 19 23:49:58.527: INFO: Pod "pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42" satisfied condition "Succeeded or Failed" Apr 19 23:49:58.529: INFO: Trying to get logs from node latest-worker pod pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42 container test-container: STEP: delete the pod Apr 19 23:49:58.568: INFO: Waiting for pod pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42 to disappear Apr 19 23:49:58.574: INFO: Pod pod-d1b080eb-7e16-489a-83e8-3cea1d7bff42 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:49:58.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1985" for this suite. 
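The two EmptyDir tests above each create a pod with an emptyDir volume (default medium, then tmpfs), wait for it to reach Succeeded, and check the resulting file permissions. A minimal sketch of the kind of manifest such a test builds; the pod name, image, and command are illustrative assumptions, not taken from the log:

```python
# Hypothetical sketch of a pod exercising an emptyDir volume backed by tmpfs
# (medium: Memory). The e2e framework builds this in Go; a plain dict shows
# the same shape. Image and command are illustrative.

def make_emptydir_pod(name, file_mode=0o644, medium="Memory"):
    """Build a pod manifest with one emptyDir volume mounted at /mnt."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "busybox",  # illustrative image
                # Create a file with the requested mode, then print its perms.
                "command": ["sh", "-c",
                            f"touch /mnt/test && chmod {file_mode:o} /mnt/test"
                            " && ls -l /mnt/test"],
                "volumeMounts": [{"name": "test-volume", "mountPath": "/mnt"}],
            }],
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": medium}}],
        },
    }

pod = make_emptydir_pod("pod-emptydir-demo")
```

Omitting `"medium"` (or setting it to `""`) would give the node's default storage medium instead of tmpfs, which is the difference between the two test variants in the log.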
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":615,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:49:58.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 19 23:49:59.048: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 19 23:50:01.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936999, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936999, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936999, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722936999, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 19 23:50:04.102: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 19 23:50:04.123: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:50:04.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4002" for this suite. STEP: Destroying namespace "webhook-4002-markers" for this suite. 
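The webhook test above registers a validating webhook for CRDs and then verifies that creating a custom resource definition is denied. A rough sketch of the kind of `ValidatingWebhookConfiguration` involved; the configuration name, webhook name, and service path are assumptions, while the namespace and service name are taken from the log:

```python
# Hypothetical ValidatingWebhookConfiguration that intercepts CRD creation.
# failurePolicy "Fail" means an unreachable webhook also blocks the request.

def make_crd_denying_webhook(service_ns, service_name):
    return {
        "apiVersion": "admissionregistration.k8s.io/v1",
        "kind": "ValidatingWebhookConfiguration",
        "metadata": {"name": "deny-crd-creation"},  # illustrative name
        "webhooks": [{
            "name": "deny-crd.example.com",  # illustrative name
            "admissionReviewVersions": ["v1"],
            "sideEffects": "None",
            "failurePolicy": "Fail",
            "rules": [{
                "apiGroups": ["apiextensions.k8s.io"],
                "apiVersions": ["*"],
                "operations": ["CREATE"],
                "resources": ["customresourcedefinitions"],
            }],
            "clientConfig": {
                # Namespace and service name appear in the log; the path
                # is an assumption.
                "service": {"namespace": service_ns, "name": service_name,
                            "path": "/crd"},
            },
        }],
    }

cfg = make_crd_denying_webhook("webhook-4002", "e2e-test-webhook")
```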
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.685 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":32,"skipped":626,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:50:04.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3772.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3772.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: 
looking for the results for each expected name from probers Apr 19 23:50:10.368: INFO: File jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-ab8dbe62-b4db-4921-a9ff-b4da1316793b contains '' instead of 'foo.example.com.' Apr 19 23:50:10.368: INFO: Lookups using dns-3772/dns-test-ab8dbe62-b4db-4921-a9ff-b4da1316793b failed for: [jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local] Apr 19 23:50:15.377: INFO: DNS probes using dns-test-ab8dbe62-b4db-4921-a9ff-b4da1316793b succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3772.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3772.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 19 23:50:23.479: INFO: File wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 19 23:50:23.483: INFO: File jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 19 23:50:23.483: INFO: Lookups using dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e failed for: [wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local] Apr 19 23:50:28.488: INFO: File wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 19 23:50:28.492: INFO: File jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 19 23:50:28.492: INFO: Lookups using dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e failed for: [wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local] Apr 19 23:50:33.488: INFO: File wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 19 23:50:33.493: INFO: File jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 19 23:50:33.493: INFO: Lookups using dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e failed for: [wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local] Apr 19 23:50:38.488: INFO: File wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local from pod dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 19 23:50:38.492: INFO: Lookups using dns-3772/dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e failed for: [wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local] Apr 19 23:50:43.492: INFO: DNS probes using dns-test-37e10369-bb68-4501-bf4e-a05fe4416f7e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3772.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3772.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3772.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3772.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 19 23:50:50.203: INFO: DNS probes using dns-test-2878481b-73b4-4196-ac33-9870dd51c7ad succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:50:50.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3772" for this suite. 
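The ExternalName test above creates a service whose DNS name resolves to a CNAME, patches `spec.externalName` from `foo.example.com` to `bar.example.com`, and finally converts the service to `type=ClusterIP`; the repeated "contains 'foo.example.com.'" lines are probes observing the old CNAME until the change propagates. A minimal sketch of the service object, using the names from the log:

```python
# Sketch of an ExternalName service: cluster DNS serves a CNAME pointing
# at spec.externalName for <name>.<namespace>.svc.cluster.local.

def make_external_name_service(name, namespace, external_name):
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"type": "ExternalName", "externalName": external_name},
    }

svc = make_external_name_service("dns-test-service-3", "dns-3772",
                                 "foo.example.com")
# Patching spec.externalName to "bar.example.com" changes the CNAME that
# cluster DNS returns for dns-test-service-3.dns-3772.svc.cluster.local.
svc["spec"]["externalName"] = "bar.example.com"
```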
• [SLOW TEST:46.191 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":33,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:50:50.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-71c6770d-3b1b-4d97-bc2d-081f7c053cd5 in namespace container-probe-7924 Apr 19 23:50:54.865: INFO: Started pod liveness-71c6770d-3b1b-4d97-bc2d-081f7c053cd5 in namespace container-probe-7924 STEP: checking the pod's current state and verifying that restartCount is present Apr 19 23:50:54.868: INFO: Initial restart count of pod liveness-71c6770d-3b1b-4d97-bc2d-081f7c053cd5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:54:55.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7924" for this suite. • [SLOW TEST:245.028 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":668,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:54:55.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 19 23:54:55.854: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ca47ff6c-9b6d-4cbb-9cfd-ce131d364bc1" in namespace 
"security-context-test-3676" to be "Succeeded or Failed" Apr 19 23:54:55.867: INFO: Pod "busybox-readonly-false-ca47ff6c-9b6d-4cbb-9cfd-ce131d364bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.502458ms Apr 19 23:54:57.888: INFO: Pod "busybox-readonly-false-ca47ff6c-9b6d-4cbb-9cfd-ce131d364bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033755304s Apr 19 23:54:59.919: INFO: Pod "busybox-readonly-false-ca47ff6c-9b6d-4cbb-9cfd-ce131d364bc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064996647s Apr 19 23:54:59.919: INFO: Pod "busybox-readonly-false-ca47ff6c-9b6d-4cbb-9cfd-ce131d364bc1" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:54:59.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3676" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":673,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:54:59.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-786428d7-bc7e-444c-8a2f-fbc3d1431416 STEP: Creating a pod to test consume configMaps Apr 19 23:54:59.996: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0" in namespace "projected-2458" to be "Succeeded or Failed" Apr 19 23:55:00.068: INFO: Pod "pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 72.550816ms Apr 19 23:55:02.086: INFO: Pod "pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090584295s Apr 19 23:55:04.091: INFO: Pod "pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094717099s STEP: Saw pod success Apr 19 23:55:04.091: INFO: Pod "pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0" satisfied condition "Succeeded or Failed" Apr 19 23:55:04.093: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0 container projected-configmap-volume-test: STEP: delete the pod Apr 19 23:55:04.150: INFO: Waiting for pod pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0 to disappear Apr 19 23:55:04.200: INFO: Pod pod-projected-configmaps-a59e4ac4-5501-4adb-9115-e71d8287a7f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:04.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2458" for this suite. 
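The projected-configMap test above mounts a ConfigMap through a `projected` volume and has the container cat the mounted key. A minimal sketch of that pod shape; the mount path, key name, and image are illustrative assumptions:

```python
# Hypothetical pod consuming a ConfigMap via a projected volume. A projected
# volume can combine several sources (configMap, secret, downwardAPI,
# serviceAccountToken) under one mount; here only a configMap is projected.

def make_projected_configmap_pod(pod_name, configmap_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "busybox",  # illustrative image
                # Print one projected key; "data-1" is an assumed key name.
                "command": ["cat", "/etc/projected/data-1"],
                "volumeMounts": [{"name": "projected-volume",
                                  "mountPath": "/etc/projected"}],
            }],
            "volumes": [{
                "name": "projected-volume",
                "projected": {"sources": [
                    {"configMap": {"name": configmap_name}},
                ]},
            }],
        },
    }

pod = make_projected_configmap_pod("pod-projected-demo", "cm-demo")
```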
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":675,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:04.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-ff1b9b3d-7891-46b5-9789-71e1a0c96ffe STEP: Creating a pod to test consume configMaps Apr 19 23:55:04.288: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e" in namespace "projected-157" to be "Succeeded or Failed" Apr 19 23:55:04.292: INFO: Pod "pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144439ms Apr 19 23:55:06.296: INFO: Pod "pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00771967s Apr 19 23:55:08.300: INFO: Pod "pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011838587s STEP: Saw pod success Apr 19 23:55:08.300: INFO: Pod "pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e" satisfied condition "Succeeded or Failed" Apr 19 23:55:08.303: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e container projected-configmap-volume-test: STEP: delete the pod Apr 19 23:55:08.341: INFO: Waiting for pod pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e to disappear Apr 19 23:55:08.416: INFO: Pod pod-projected-configmaps-767bd9c3-934c-40a1-873c-8b3fb4501e4e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:08.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-157" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":675,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:08.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 19 23:55:08.455: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:17.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5082" for this suite. • [SLOW TEST:9.108 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":38,"skipped":684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:17.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
Creating a pod to test downward API volume plugin Apr 19 23:55:17.619: INFO: Waiting up to 5m0s for pod "downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11" in namespace "downward-api-5452" to be "Succeeded or Failed" Apr 19 23:55:17.623: INFO: Pod "downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234361ms Apr 19 23:55:19.644: INFO: Pod "downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024605625s Apr 19 23:55:21.648: INFO: Pod "downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028866274s STEP: Saw pod success Apr 19 23:55:21.648: INFO: Pod "downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11" satisfied condition "Succeeded or Failed" Apr 19 23:55:21.651: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11 container client-container: STEP: delete the pod Apr 19 23:55:21.671: INFO: Waiting for pod downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11 to disappear Apr 19 23:55:21.709: INFO: Pod downwardapi-volume-551cd378-5d07-4d11-80bb-43ef78ab0e11 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:21.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5452" for this suite. 
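The Downward API test above projects the pod's own name into a file via a `downwardAPI` volume and verifies the container can read it. A minimal sketch of that volume shape; the container name matches the log, while the image and mount path are assumptions:

```python
# Hypothetical pod exposing its own metadata.name through a downwardAPI
# volume: the kubelet writes the field value into a file under the mount.

def make_downward_api_pod(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "busybox",  # illustrative image
                "command": ["cat", "/etc/podinfo/podname"],
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {"items": [{
                    "path": "podname",
                    "fieldRef": {"fieldPath": "metadata.name"},
                }]},
            }],
        },
    }

pod = make_downward_api_pod("downwardapi-volume-demo")
# The file /etc/podinfo/podname inside the container would contain
# "downwardapi-volume-demo".
```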
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:21.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 19 23:55:25.873: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:25.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9683" for this suite. 
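The termination-message test above runs a container that exits successfully with `TerminationMessagePolicy: FallbackToLogsOnError` and asserts the termination message is empty (the `Expected: &{} to match` line). A sketch of the relevant container fields; the container name and image are illustrative:

```python
# Hypothetical container spec for the FallbackToLogsOnError case. With this
# policy the kubelet only falls back to container logs when the container
# exits with an error and wrote nothing to terminationMessagePath; on a
# clean exit the termination message stays empty.

def make_termination_message_container(policy="FallbackToLogsOnError"):
    return {
        "name": "termination-message-container",  # illustrative name
        "image": "busybox",                       # illustrative image
        "command": ["true"],  # exit 0, write no termination message
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": policy,
    }

ctr = make_termination_message_container()
```

The default policy, `File`, would also leave the message empty here; the test's point is that `FallbackToLogsOnError` does not pull in log output when the pod succeeds.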
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":765,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:25.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-4a7ba2ad-9505-4f48-8b00-0dbb1d8e295f STEP: Creating a pod to test consume configMaps Apr 19 23:55:25.966: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f" in namespace "projected-7322" to be "Succeeded or Failed" Apr 19 23:55:25.999: INFO: Pod "pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.165019ms Apr 19 23:55:28.002: INFO: Pod "pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035953836s Apr 19 23:55:30.006: INFO: Pod "pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039909332s STEP: Saw pod success Apr 19 23:55:30.006: INFO: Pod "pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f" satisfied condition "Succeeded or Failed" Apr 19 23:55:30.008: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f container projected-configmap-volume-test: STEP: delete the pod Apr 19 23:55:30.025: INFO: Waiting for pod pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f to disappear Apr 19 23:55:30.069: INFO: Pod pod-projected-configmaps-c305bd5c-f2a8-4c27-9fd1-c9c1b0721e0f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:30.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7322" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":782,"failed":0} ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:30.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:34.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-655" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:34.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:39.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3956" for this suite. 
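Editor's note: the invariant this Watchers test checks — every concurrent watch, no matter which resourceVersion it starts from, observes the remaining events in the same order — can be sketched in a self-contained way. The event data below is made up for illustration, not taken from this run.

```python
# Sketch: N watchers consuming the same event stream must agree on order.
# The events and resourceVersions below are illustrative, not from this run.
events = [{"type": "ADDED", "resourceVersion": str(rv)} for rv in range(100, 110)]

def watch_from(stream, start_rv):
    """Replay events with resourceVersion greater than start_rv, in order."""
    return [e for e in stream if int(e["resourceVersion"]) > int(start_rv)]

# Start one watcher from each resource version the stream produced.
watchers = [watch_from(events, e["resourceVersion"]) for e in events[:-1]]

# Every watcher's view must be a suffix of the full ordered stream.
for view in watchers:
    assert view == events[len(events) - len(view):]
```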
• [SLOW TEST:5.308 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":43,"skipped":809,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:39.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 19 23:55:40.053: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 19 23:55:42.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937340, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937340, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937340, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937340, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 19 23:55:45.096: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:45.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9677" for this suite. STEP: Destroying namespace "webhook-9677-markers" for this suite. 
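Editor's note: the "fail closed" behavior exercised above comes from the webhook's `failurePolicy`. A minimal sketch of such a registration, expressed as a plain dict (the names and the deliberately unreachable service are illustrative, not the exact objects the test created):

```python
# Illustrative ValidatingWebhookConfiguration for a fail-closed webhook.
# The service reference points nowhere on purpose, mirroring the test:
# with failurePolicy "Fail", an unreachable webhook rejects every request
# it matches instead of letting it through.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "fail-closed-example"},  # hypothetical name
    "webhooks": [{
        "name": "fail-closed.example.com",
        "failurePolicy": "Fail",  # unreachable backend => reject, not allow
        "clientConfig": {
            "service": {"name": "no-such-service",  # hypothetical, unreachable
                        "namespace": "default", "path": "/validate"},
        },
        "rules": [{"apiGroups": [""], "apiVersions": ["v1"],
                   "operations": ["CREATE"], "resources": ["configmaps"]}],
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
    }],
}

assert webhook_config["webhooks"][0]["failurePolicy"] == "Fail"
```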
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.813 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":44,"skipped":831,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:45.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0419 23:55:46.501546 8 metrics_grabber.go:84] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 19 23:55:46.501: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:55:46.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9808" for this suite. 
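Editor's note: the orphaning semantics this Garbage collector test relies on — deleting the Deployment with `deleteOptions.propagationPolicy: Orphan` leaves the ReplicaSet it created in place, with its ownerReferences stripped — can be sketched as follows. Object names here are hypothetical.

```python
# Illustrative DeleteOptions for orphaning dependents, as in the test above.
delete_options = {
    "apiVersion": "meta.k8s.io/v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",  # vs. "Background" / "Foreground"
}

def orphan(dependent):
    """Sketch of what orphaning does to a dependent object's metadata."""
    dependent = dict(dependent)
    dependent["metadata"] = {k: v for k, v in dependent["metadata"].items()
                             if k != "ownerReferences"}
    return dependent

rs = {"kind": "ReplicaSet",
      "metadata": {"name": "example-rs",  # hypothetical name
                   "ownerReferences": [{"kind": "Deployment", "name": "example"}]}}
orphaned = orphan(rs)
assert "ownerReferences" not in orphaned["metadata"]  # RS survives, unowned
```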
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":45,"skipped":838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:55:46.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:56:46.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-536" for this suite. 
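Editor's note: a pod like the one this "readiness probe that fails" test creates can be sketched as below — a probe that always exits non-zero keeps the pod NotReady forever and, unlike a failing liveness probe, never triggers a restart. Names and probe parameters are illustrative, not the exact spec the test used.

```python
# Illustrative pod: an always-failing readiness probe and no liveness probe,
# so the container stays NotReady but is never restarted.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "never-ready"},  # hypothetical name
    "spec": {"containers": [{
        "name": "busybox",
        "image": "busybox:1.29",
        "args": ["sleep", "3600"],
        "readinessProbe": {
            "exec": {"command": ["/bin/false"]},  # always fails
            "periodSeconds": 5,
        },
        # No livenessProbe: nothing here can trigger a restart.
    }]},
}

container = pod["spec"]["containers"][0]
assert "livenessProbe" not in container
assert container["readinessProbe"]["exec"]["command"] == ["/bin/false"]
```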
• [SLOW TEST:60.103 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:56:46.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-471222e9-3905-4457-893d-1ee90a0f0384 STEP: Creating a pod to test consume secrets Apr 19 23:56:46.743: INFO: Waiting up to 5m0s for pod "pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef" in namespace "secrets-6862" to be "Succeeded or Failed" Apr 19 23:56:46.795: INFO: Pod "pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef": Phase="Pending", Reason="", readiness=false. Elapsed: 51.850527ms Apr 19 23:56:48.799: INFO: Pod "pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.055572813s Apr 19 23:56:50.803: INFO: Pod "pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059733222s STEP: Saw pod success Apr 19 23:56:50.803: INFO: Pod "pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef" satisfied condition "Succeeded or Failed" Apr 19 23:56:50.807: INFO: Trying to get logs from node latest-worker pod pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef container secret-volume-test: STEP: delete the pod Apr 19 23:56:50.837: INFO: Waiting for pod pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef to disappear Apr 19 23:56:50.845: INFO: Pod pod-secrets-e49116a5-6066-449d-a7aa-32f3fbcc21ef no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:56:50.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6862" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":898,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:56:50.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:04.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6991" for this suite. • [SLOW TEST:13.183 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":48,"skipped":904,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:04.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4720 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4720 STEP: creating replication controller externalsvc in namespace services-4720 I0419 23:57:04.225856 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4720, replica count: 2 I0419 23:57:07.276355 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0419 23:57:10.276642 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 19 23:57:10.320: INFO: Creating new exec pod Apr 19 23:57:14.335: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4720 execpodk928c -- /bin/sh -x -c nslookup nodeport-service' Apr 19 23:57:14.573: INFO: stderr: "I0419 23:57:14.475231 1128 log.go:172] (0xc00003a9a0) (0xc00080f400) Create stream\nI0419 23:57:14.475283 1128 log.go:172] (0xc00003a9a0) (0xc00080f400) Stream added, broadcasting: 1\nI0419 23:57:14.478440 1128 log.go:172] (0xc00003a9a0) Reply frame received for 1\nI0419 23:57:14.478490 1128 log.go:172] (0xc00003a9a0) (0xc00097a000) Create stream\nI0419 23:57:14.478503 1128 log.go:172] (0xc00003a9a0) (0xc00097a000) Stream added, broadcasting: 3\nI0419 23:57:14.479487 1128 log.go:172] (0xc00003a9a0) Reply frame received for 3\nI0419 23:57:14.479520 1128 log.go:172] (0xc00003a9a0) (0xc00080f4a0) Create stream\nI0419 23:57:14.479532 1128 log.go:172] (0xc00003a9a0) (0xc00080f4a0) Stream added, broadcasting: 5\nI0419 23:57:14.480390 1128 log.go:172] (0xc00003a9a0) Reply frame received for 5\nI0419 23:57:14.556885 1128 log.go:172] (0xc00003a9a0) Data frame received for 5\nI0419 23:57:14.556935 1128 log.go:172] (0xc00080f4a0) (5) Data frame handling\nI0419 23:57:14.556968 1128 log.go:172] (0xc00080f4a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0419 23:57:14.564273 1128 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0419 23:57:14.564318 1128 log.go:172] (0xc00097a000) (3) Data frame handling\nI0419 23:57:14.564349 1128 log.go:172] (0xc00097a000) (3) Data frame sent\nI0419 23:57:14.565692 1128 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0419 23:57:14.565726 1128 log.go:172] (0xc00097a000) (3) Data frame handling\nI0419 23:57:14.565757 1128 log.go:172] (0xc00097a000) (3) Data frame sent\nI0419 23:57:14.565888 1128 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0419 23:57:14.565914 1128 log.go:172] (0xc00097a000) (3) Data frame handling\nI0419 23:57:14.565955 1128 log.go:172] (0xc00003a9a0) Data frame received for 5\nI0419 23:57:14.565980 1128 
log.go:172] (0xc00080f4a0) (5) Data frame handling\nI0419 23:57:14.568023 1128 log.go:172] (0xc00003a9a0) Data frame received for 1\nI0419 23:57:14.568051 1128 log.go:172] (0xc00080f400) (1) Data frame handling\nI0419 23:57:14.568072 1128 log.go:172] (0xc00080f400) (1) Data frame sent\nI0419 23:57:14.568095 1128 log.go:172] (0xc00003a9a0) (0xc00080f400) Stream removed, broadcasting: 1\nI0419 23:57:14.568120 1128 log.go:172] (0xc00003a9a0) Go away received\nI0419 23:57:14.568566 1128 log.go:172] (0xc00003a9a0) (0xc00080f400) Stream removed, broadcasting: 1\nI0419 23:57:14.568598 1128 log.go:172] (0xc00003a9a0) (0xc00097a000) Stream removed, broadcasting: 3\nI0419 23:57:14.568611 1128 log.go:172] (0xc00003a9a0) (0xc00080f4a0) Stream removed, broadcasting: 5\n" Apr 19 23:57:14.573: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4720.svc.cluster.local\tcanonical name = externalsvc.services-4720.svc.cluster.local.\nName:\texternalsvc.services-4720.svc.cluster.local\nAddress: 10.96.172.67\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4720, will wait for the garbage collector to delete the pods Apr 19 23:57:14.632: INFO: Deleting ReplicationController externalsvc took: 4.669002ms Apr 19 23:57:14.932: INFO: Terminating ReplicationController externalsvc pods took: 300.228028ms Apr 19 23:57:19.873: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:19.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4720" for this suite. 
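Editor's note: a sketch of the NodePort-to-ExternalName conversion the Services test performs. The replacement spec drops the ports and selector and points the service name at another FQDN, which cluster DNS then answers with a CNAME — exactly what the `nslookup` output above shows. The `nodePort` value is illustrative; the `externalName` FQDN is taken from the log.

```python
# Illustrative service conversion: NodePort -> ExternalName.
nodeport_svc = {
    "apiVersion": "v1", "kind": "Service",
    "metadata": {"name": "nodeport-service"},
    "spec": {"type": "NodePort",
             "ports": [{"port": 80, "nodePort": 30080}]},  # nodePort illustrative
}

external_name_spec = {
    "type": "ExternalName",
    # FQDN of the backing service, as seen in the nslookup output above:
    "externalName": "externalsvc.services-4720.svc.cluster.local",
    # No clusterIP, ports, or selector for ExternalName services.
}

converted = {**nodeport_svc, "spec": external_name_spec}
assert converted["spec"]["type"] == "ExternalName"
assert "ports" not in converted["spec"]
```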
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:15.889 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":49,"skipped":909,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:19.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-7975db70-053e-41eb-aa7c-689ebb31c359 STEP: Creating a pod to test consume configMaps Apr 19 23:57:19.992: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002" in namespace "configmap-5438" to be "Succeeded or Failed" Apr 19 23:57:19.997: INFO: Pod "pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.486767ms Apr 19 23:57:22.000: INFO: Pod "pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007632087s Apr 19 23:57:24.005: INFO: Pod "pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012376363s STEP: Saw pod success Apr 19 23:57:24.005: INFO: Pod "pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002" satisfied condition "Succeeded or Failed" Apr 19 23:57:24.008: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002 container configmap-volume-test: STEP: delete the pod Apr 19 23:57:24.054: INFO: Waiting for pod pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002 to disappear Apr 19 23:57:24.063: INFO: Pod pod-configmaps-d7e46250-1473-4453-9113-caebd59fd002 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:24.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5438" for this suite. 
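Editor's note: the "mappings and Item mode" case above projects a ConfigMap key to a different file path with an explicit per-item file mode. A minimal sketch of such a volume, with hypothetical key and path names:

```python
# Illustrative configMap volume with a key mapping and a per-item file mode.
volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",  # hypothetical ConfigMap name
        "items": [{
            "key": "data-1",            # key in the ConfigMap
            "path": "path/to/data-2",   # file path inside the mount
            "mode": 0o400,              # per-item file mode (read-only, owner)
        }],
    },
}

item = volume["configMap"]["items"][0]
assert item["path"] != item["key"]   # the mapping renames the key
assert oct(item["mode"]) == "0o400"
```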
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:24.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 19 23:57:28.664: INFO: Successfully updated pod "labelsupdate5cc1f7b4-2731-442f-8acb-3b42310102c0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:32.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9839" for this suite. 
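Editor's note: the "update labels on modification" behavior rests on a downwardAPI volume item with a `fieldRef` to `metadata.labels` — the kubelet rewrites the projected file when the pod's labels change, without restarting the container. A minimal sketch of such a volume:

```python
# Illustrative downwardAPI volume projecting the pod's labels to a file.
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "labels",                              # file name in the mount
            "fieldRef": {"fieldPath": "metadata.labels"},  # updated live by kubelet
        }],
    },
}

assert volume["downwardAPI"]["items"][0]["fieldRef"]["fieldPath"] == "metadata.labels"
```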
• [SLOW TEST:8.658 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":942,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:32.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 19 23:57:32.774: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 19 23:57:35.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9621 create -f -' Apr 19 23:57:39.099: INFO: stderr: "" Apr 19 23:57:39.099: INFO: stdout: "e2e-test-crd-publish-openapi-1018-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 19 23:57:39.099: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9621 delete e2e-test-crd-publish-openapi-1018-crds test-cr' Apr 19 23:57:39.221: INFO: stderr: "" Apr 19 23:57:39.221: INFO: stdout: "e2e-test-crd-publish-openapi-1018-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 19 23:57:39.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9621 apply -f -' Apr 19 23:57:39.537: INFO: stderr: "" Apr 19 23:57:39.537: INFO: stdout: "e2e-test-crd-publish-openapi-1018-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 19 23:57:39.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9621 delete e2e-test-crd-publish-openapi-1018-crds test-cr' Apr 19 23:57:39.642: INFO: stderr: "" Apr 19 23:57:39.642: INFO: stdout: "e2e-test-crd-publish-openapi-1018-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 19 23:57:39.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1018-crds' Apr 19 23:57:39.847: INFO: stderr: "" Apr 19 23:57:39.847: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1018-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:41.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9621" for this suite. 
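Editor's note: "preserving unknown fields at the schema root" means the CRD's validation schema carries `x-kubernetes-preserve-unknown-fields: true`, which disables pruning — that is why the `kubectl create`/`apply` calls above accepted arbitrary unknown properties and `kubectl explain` printed an empty description. A sketch of the schema and of what pruning would otherwise do (the `prune` helper is a simplified illustration, not the apiserver's implementation):

```python
# Illustrative CRD schema root that opts out of pruning.
crd_schema = {
    "openAPIV3Schema": {
        "type": "object",
        "x-kubernetes-preserve-unknown-fields": True,
    },
}

def prune(obj, schema):
    """Simplified sketch of apiserver pruning: drop unknown top-level fields
    unless the schema preserves them."""
    if schema.get("x-kubernetes-preserve-unknown-fields"):
        return obj
    known = schema.get("properties", {})
    return {k: v for k, v in obj.items() if k in known}

cr = {"spec": {"anything": "goes"}, "unknown": 42}
assert prune(cr, crd_schema["openAPIV3Schema"]) == cr  # nothing pruned
```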
• [SLOW TEST:9.042 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":52,"skipped":959,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:41.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 19 23:57:41.848: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e" in namespace "downward-api-1118" to be "Succeeded or Failed" Apr 19 23:57:41.852: INFO: Pod "downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.670665ms Apr 19 23:57:43.855: INFO: Pod "downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006346311s Apr 19 23:57:45.861: INFO: Pod "downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012272929s STEP: Saw pod success Apr 19 23:57:45.861: INFO: Pod "downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e" satisfied condition "Succeeded or Failed" Apr 19 23:57:45.863: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e container client-container: STEP: delete the pod Apr 19 23:57:45.892: INFO: Waiting for pod downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e to disappear Apr 19 23:57:45.912: INFO: Pod downwardapi-volume-d894a695-7119-4d3b-92a6-65148788331e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:45.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1118" for this suite. 
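Editor's note: the "default memory limit" case projects `limits.memory` through a downwardAPI `resourceFieldRef`; when the container sets no memory limit, the projected value falls back to the node's allocatable memory, which is what the test verifies. A minimal sketch of the item (container name mirrors the log; the path is illustrative):

```python
# Illustrative downwardAPI item projecting the container's memory limit.
item = {
    "path": "memory_limit",  # illustrative file name
    "resourceFieldRef": {
        "containerName": "client-container",
        "resource": "limits.memory",  # defaults to node allocatable if unset
    },
}

assert item["resourceFieldRef"]["resource"] == "limits.memory"
```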
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:45.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 19 23:57:45.961: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:46.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9787" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":54,"skipped":994,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:46.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:57:47.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-584" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":55,"skipped":1013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:57:47.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 19 23:57:47.245: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:47.286: INFO: Number of nodes with available pods: 0 Apr 19 23:57:47.286: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:48.291: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:48.294: INFO: Number of nodes with available pods: 0 Apr 19 23:57:48.294: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:49.291: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:49.294: INFO: Number of nodes with available pods: 0 Apr 19 23:57:49.294: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:50.290: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:50.293: INFO: Number of nodes with available pods: 0 Apr 19 23:57:50.293: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:51.292: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:51.295: INFO: Number of nodes with available pods: 1 Apr 19 23:57:51.295: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:52.290: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:52.293: INFO: Number of nodes with available pods: 2 Apr 19 23:57:52.293: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 19 23:57:52.351: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:52.356: INFO: Number of nodes with available pods: 1 Apr 19 23:57:52.356: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:53.361: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:53.364: INFO: Number of nodes with available pods: 1 Apr 19 23:57:53.364: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:54.361: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:54.364: INFO: Number of nodes with available pods: 1 Apr 19 23:57:54.364: INFO: Node latest-worker is running more than one daemon pod Apr 19 23:57:55.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 19 23:57:55.366: INFO: Number of nodes with available pods: 2 Apr 19 23:57:55.366: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-932, will wait for the garbage collector to delete the pods Apr 19 23:57:55.429: INFO: Deleting DaemonSet.extensions daemon-set took: 6.181477ms Apr 19 23:57:55.730: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.282905ms Apr 19 23:58:02.969: INFO: Number of nodes with available pods: 0 Apr 19 23:58:02.969: INFO: Number of running nodes: 0, number of available pods: 0 Apr 19 23:58:02.975: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-932/daemonsets","resourceVersion":"9456325"},"items":null} Apr 19 23:58:02.981: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-932/pods","resourceVersion":"9456326"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:58:02.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-932" for this suite. 
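The repeated "DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master ... Effect:NoSchedule}]" lines come from the test skipping the tainted control-plane node when counting available pods. The core check — does some toleration on the pod match every taint on the node — can be sketched with simplified structs (the real logic lives in `v1.Toleration.ToleratesTaint` in `k8s.io/api/core/v1` and handles more fields; this reduced version covers only the `Exists` operator):

```go
package main

import "fmt"

// Simplified stand-ins for v1.Taint / v1.Toleration; the real types also
// carry Value (matched by the Equal operator), TolerationSeconds, etc.
type Taint struct {
	Key    string
	Effect string
}

type Toleration struct {
	Key      string // empty key with Exists tolerates every taint
	Operator string // "Exists" (the only operator handled here)
	Effect   string // empty effect matches all effects
}

// tolerates reports whether tol matches taint, following the same basic
// rules the scheduler applies, reduced to the Exists operator.
func tolerates(tol Toleration, taint Taint) bool {
	if tol.Effect != "" && tol.Effect != taint.Effect {
		return false
	}
	if tol.Key == "" && tol.Operator == "Exists" {
		return true
	}
	return tol.Key == taint.Key && tol.Operator == "Exists"
}

// toleratesAll reports whether every taint on a node is tolerated —
// the condition the DaemonSet test checks before counting a node.
func toleratesAll(tols []Toleration, taints []Taint) bool {
	for _, taint := range taints {
		ok := false
		for _, tol := range tols {
			if tolerates(tol, taint) {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	// The test's DaemonSet pods carry no master toleration, so the
	// control-plane node is skipped, exactly as the log reports.
	fmt.Println(toleratesAll(nil, master))
}
```

This is why the test converges on "Number of running nodes: 2" rather than 3: only the two untainted workers count.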
• [SLOW TEST:15.848 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":56,"skipped":1038,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:58:02.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 19 23:58:03.743: INFO: PodSpec: initContainers in spec.initContainers Apr 19 23:58:50.473: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-21965d71-179e-43e6-9ac3-0ded69ce2aa7", GenerateName:"", Namespace:"init-container-3766", SelfLink:"/api/v1/namespaces/init-container-3766/pods/pod-init-21965d71-179e-43e6-9ac3-0ded69ce2aa7", UID:"ebea2c3c-a072-4e78-b73c-b385138fd073", 
ResourceVersion:"9456521", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722937483, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"743218359"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nh8dl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004d6d980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nh8dl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nh8dl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nh8dl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f66998), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f6abd0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f66a60)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f66a80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f66a88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f66a8c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937483, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937483, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937483, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937483, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.151", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.151"}}, StartTime:(*v1.Time)(0xc002869400), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f6acb0)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f6ad90)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://19887e683d29991c277ccae9ea5fbe53e9d6021209a57dbad2b757976c213cc6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002869440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002869420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002f66b1f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:58:50.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3766" for this suite. 
• [SLOW TEST:47.505 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":57,"skipped":1054,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:58:50.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-3f479842-7a2b-4dc9-ab03-27bfb151bb2c STEP: Creating a pod to test consume secrets Apr 19 23:58:50.690: INFO: Waiting up to 5m0s for pod "pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832" in namespace "secrets-3362" to be "Succeeded or Failed" Apr 19 23:58:50.695: INFO: Pod "pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.150329ms Apr 19 23:58:52.699: INFO: Pod "pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00825633s Apr 19 23:58:54.703: INFO: Pod "pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012603895s STEP: Saw pod success Apr 19 23:58:54.703: INFO: Pod "pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832" satisfied condition "Succeeded or Failed" Apr 19 23:58:54.706: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832 container secret-volume-test: STEP: delete the pod Apr 19 23:58:54.727: INFO: Waiting for pod pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832 to disappear Apr 19 23:58:54.736: INFO: Pod pod-secrets-22482ca3-c6dd-4551-bb62-df9fc0636832 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:58:54.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3362" for this suite. STEP: Destroying namespace "secret-namespace-2617" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":1066,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:58:54.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 19 23:58:55.198: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 19 23:58:57.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937535, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937535, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937535, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937535, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 19 23:59:00.249: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:59:00.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3192" for this suite. STEP: Destroying namespace "webhook-3192-markers" for this suite. 
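The `DeploymentStatus` dump above catches the webhook deployment mid-rollout: `UpdatedReplicas:1` but `ReadyReplicas:0` and `UnavailableReplicas:1`, so the "Wait for the deployment to be ready" step keeps polling. The completeness check reduces to comparing those counters — a simplified version below (the real e2e helper also verifies `ObservedGeneration` against the spec's generation):

```go
package main

import "fmt"

// DeploymentStatus mirrors the fields of v1.DeploymentStatus used here.
type DeploymentStatus struct {
	ObservedGeneration  int64
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	UnavailableReplicas int32
}

// complete reports whether a rollout has finished for a deployment that
// wants `desired` replicas — a simplified form of the check behind the
// test's "Wait for the deployment to be ready" step.
func complete(s DeploymentStatus, desired int32) bool {
	return s.UpdatedReplicas == desired &&
		s.ReadyReplicas == desired &&
		s.UnavailableReplicas == 0
}

func main() {
	// Status as dumped mid-rollout in the log above: updated but not ready.
	progressing := DeploymentStatus{ObservedGeneration: 1, Replicas: 1,
		UpdatedReplicas: 1, ReadyReplicas: 0, UnavailableReplicas: 1}
	fmt.Println(complete(progressing, 1)) // false: keep polling

	ready := DeploymentStatus{ObservedGeneration: 1, Replicas: 1,
		UpdatedReplicas: 1, ReadyReplicas: 1, UnavailableReplicas: 0}
	fmt.Println(complete(ready, 1)) // true: poll loop exits
}
```

Once the status settles, the test proceeds to pair the webhook service with its endpoint, as the subsequent log lines show.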
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.038 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":59,"skipped":1089,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:59:00.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-4e77ec6b-3938-4496-b76a-82f160526984 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:59:00.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3315" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":60,"skipped":1100,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 19 23:59:00.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 19 23:59:09.083: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:09.088: INFO: Pod pod-with-prestop-http-hook still exists Apr 19 23:59:11.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:11.092: INFO: Pod pod-with-prestop-http-hook still exists Apr 19 23:59:13.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:13.092: INFO: Pod pod-with-prestop-http-hook still exists Apr 19 23:59:15.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:15.092: INFO: Pod pod-with-prestop-http-hook still exists Apr 19 23:59:17.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:17.092: INFO: Pod pod-with-prestop-http-hook still exists Apr 19 23:59:19.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:19.092: INFO: Pod pod-with-prestop-http-hook still exists Apr 19 23:59:21.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:21.092: INFO: Pod pod-with-prestop-http-hook still exists Apr 19 23:59:23.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 19 23:59:23.092: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 19 23:59:23.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1516" for this suite. 
• [SLOW TEST:22.166 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1107,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:59:23.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 19 23:59:23.698: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 19 23:59:25.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 19 23:59:27.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937563, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 19 23:59:30.767: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 19 23:59:30.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 19 23:59:31.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7463" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:8.940 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":62,"skipped":1121,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:59:32.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 19 23:59:32.153: INFO: Create a RollingUpdate DaemonSet
Apr 19 23:59:32.156: INFO: Check that daemon pods launch on every node of the cluster
Apr 19 23:59:32.179: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:32.207: INFO: Number of nodes with available pods: 0
Apr 19 23:59:32.207: INFO: Node latest-worker is running more than one daemon pod
Apr 19 23:59:33.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:33.238: INFO: Number of nodes with available pods: 0
Apr 19 23:59:33.238: INFO: Node latest-worker is running more than one daemon pod
Apr 19 23:59:34.213: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:34.217: INFO: Number of nodes with available pods: 0
Apr 19 23:59:34.217: INFO: Node latest-worker is running more than one daemon pod
Apr 19 23:59:35.221: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:35.226: INFO: Number of nodes with available pods: 1
Apr 19 23:59:35.226: INFO: Node latest-worker is running more than one daemon pod
Apr 19 23:59:36.213: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:36.216: INFO: Number of nodes with available pods: 2
Apr 19 23:59:36.216: INFO: Number of running nodes: 2, number of available pods: 2
Apr 19 23:59:36.216: INFO: Update the DaemonSet to trigger a rollout
Apr 19 23:59:36.223: INFO: Updating DaemonSet daemon-set
Apr 19 23:59:39.246: INFO: Roll back the DaemonSet before rollout is complete
Apr 19 23:59:39.252: INFO: Updating DaemonSet daemon-set
Apr 19 23:59:39.252: INFO: Make sure DaemonSet rollback is complete
Apr 19 23:59:39.259: INFO: Wrong image for pod: daemon-set-gpw8l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 19 23:59:39.259: INFO: Pod daemon-set-gpw8l is not available
Apr 19 23:59:39.282: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:40.286: INFO: Wrong image for pod: daemon-set-gpw8l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 19 23:59:40.286: INFO: Pod daemon-set-gpw8l is not available
Apr 19 23:59:40.291: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:41.287: INFO: Wrong image for pod: daemon-set-gpw8l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 19 23:59:41.287: INFO: Pod daemon-set-gpw8l is not available
Apr 19 23:59:41.817: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:42.287: INFO: Wrong image for pod: daemon-set-gpw8l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 19 23:59:42.287: INFO: Pod daemon-set-gpw8l is not available
Apr 19 23:59:42.291: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 19 23:59:43.294: INFO: Pod daemon-set-8t85j is not available
Apr 19 23:59:43.299: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-48, will wait for the garbage collector to delete the pods
Apr 19 23:59:43.364: INFO: Deleting DaemonSet.extensions daemon-set took: 5.68778ms
Apr 19 23:59:43.464: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.217451ms
Apr 19 23:59:52.959: INFO: Number of nodes with available pods: 0
Apr 19 23:59:52.959: INFO: Number of running nodes: 0, number of available pods: 0
Apr 19 23:59:52.961: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-48/daemonsets","resourceVersion":"9457027"},"items":null}
Apr 19 23:59:52.964: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-48/pods","resourceVersion":"9457027"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 19 23:59:52.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-48" for this suite.

• [SLOW TEST:20.930 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":63,"skipped":1134,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:59:52.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 19 23:59:53.450: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 19 23:59:55.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937593, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937593, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937593, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937593, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 19 23:59:58.810: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 19 23:59:58.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4771" for this suite.
STEP: Destroying namespace "webhook-4771-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.082 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":64,"skipped":1138,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 19 23:59:59.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 20 00:00:02.267: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:00:02.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7366" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1142,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:00:02.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 20 00:00:02.472: INFO: Waiting up to 5m0s for pod "downward-api-52e798d8-3761-4872-a12d-b915e786b5d8" in namespace "downward-api-8017" to be "Succeeded or Failed"
Apr 20 00:00:02.504: INFO: Pod "downward-api-52e798d8-3761-4872-a12d-b915e786b5d8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.307081ms
Apr 20 00:00:04.572: INFO: Pod "downward-api-52e798d8-3761-4872-a12d-b915e786b5d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099959547s
Apr 20 00:00:06.576: INFO: Pod "downward-api-52e798d8-3761-4872-a12d-b915e786b5d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104315095s
STEP: Saw pod success
Apr 20 00:00:06.576: INFO: Pod "downward-api-52e798d8-3761-4872-a12d-b915e786b5d8" satisfied condition "Succeeded or Failed"
Apr 20 00:00:06.579: INFO: Trying to get logs from node latest-worker2 pod downward-api-52e798d8-3761-4872-a12d-b915e786b5d8 container dapi-container:
STEP: delete the pod
Apr 20 00:00:06.629: INFO: Waiting for pod downward-api-52e798d8-3761-4872-a12d-b915e786b5d8 to disappear
Apr 20 00:00:06.634: INFO: Pod downward-api-52e798d8-3761-4872-a12d-b915e786b5d8 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:00:06.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8017" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1150,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:00:06.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 20 00:00:06.703: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8de6591a-de05-415a-a120-28f234700bf2" in namespace "security-context-test-5324" to be "Succeeded or Failed"
Apr 20 00:00:06.744: INFO: Pod "busybox-user-65534-8de6591a-de05-415a-a120-28f234700bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 41.044529ms
Apr 20 00:00:08.748: INFO: Pod "busybox-user-65534-8de6591a-de05-415a-a120-28f234700bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045396859s
Apr 20 00:00:10.752: INFO: Pod "busybox-user-65534-8de6591a-de05-415a-a120-28f234700bf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049530342s
Apr 20 00:00:10.752: INFO: Pod "busybox-user-65534-8de6591a-de05-415a-a120-28f234700bf2" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:00:10.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5324" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:00:10.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6038
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 20 00:00:10.823: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 20 00:00:10.857: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 20 00:00:12.862: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 20 00:00:14.861: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 20 00:00:16.861: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:18.862: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:20.862: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:22.862: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:24.861: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:26.861: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:28.863: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:30.861: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 20 00:00:32.860: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 20 00:00:32.868: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 20 00:00:34.887: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 20 00:00:38.946: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.157:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6038 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 20 00:00:38.946: INFO: >>> kubeConfig: /root/.kube/config
I0420 00:00:38.978002 8 log.go:172] (0xc002aee580) (0xc000c85d60) Create stream
I0420 00:00:38.978034 8 log.go:172] (0xc002aee580) (0xc000c85d60) Stream added, broadcasting: 1
I0420 00:00:38.979813 8 log.go:172] (0xc002aee580) Reply frame received for 1
I0420 00:00:38.979850 8 log.go:172] (0xc002aee580) (0xc000c85f40) Create stream
I0420 00:00:38.979864 8 log.go:172] (0xc002aee580) (0xc000c85f40) Stream added, broadcasting: 3
I0420 00:00:38.980721 8 log.go:172] (0xc002aee580) Reply frame received for 3
I0420 00:00:38.980756 8 log.go:172] (0xc002aee580) (0xc001aa3ea0) Create stream
I0420 00:00:38.980776 8 log.go:172] (0xc002aee580) (0xc001aa3ea0) Stream added, broadcasting: 5
I0420 00:00:38.981919 8 log.go:172] (0xc002aee580) Reply frame received for 5
I0420 00:00:39.071393 8 log.go:172] (0xc002aee580) Data frame received for 5
I0420 00:00:39.071472 8 log.go:172] (0xc002aee580) Data frame received for 3
I0420 00:00:39.071530 8 log.go:172] (0xc000c85f40) (3) Data frame handling
I0420 00:00:39.071556 8 log.go:172] (0xc000c85f40) (3) Data frame sent
I0420 00:00:39.071575 8 log.go:172] (0xc002aee580) Data frame received for 3
I0420 00:00:39.071596 8 log.go:172] (0xc001aa3ea0) (5) Data frame handling
I0420 00:00:39.071668 8 log.go:172] (0xc000c85f40) (3) Data frame handling
I0420 00:00:39.073430 8 log.go:172] (0xc002aee580) Data frame received for 1
I0420 00:00:39.073456 8 log.go:172] (0xc000c85d60) (1) Data frame handling
I0420 00:00:39.073467 8 log.go:172] (0xc000c85d60) (1) Data frame sent
I0420 00:00:39.073490 8 log.go:172] (0xc002aee580) (0xc000c85d60) Stream removed, broadcasting: 1
I0420 00:00:39.073646 8 log.go:172] (0xc002aee580) Go away received
I0420 00:00:39.073885 8 log.go:172] (0xc002aee580) (0xc000c85d60) Stream removed, broadcasting: 1
I0420 00:00:39.073905 8 log.go:172] (0xc002aee580) (0xc000c85f40) Stream removed, broadcasting: 3
I0420 00:00:39.073915 8 log.go:172] (0xc002aee580) (0xc001aa3ea0) Stream removed, broadcasting: 5
Apr 20 00:00:39.073: INFO: Found all expected endpoints: [netserver-0]
Apr 20 00:00:39.077: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.68:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6038 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 20 00:00:39.077: INFO: >>> kubeConfig: /root/.kube/config
I0420 00:00:39.109981 8 log.go:172] (0xc002aeeb00) (0xc0013ec280) Create stream
I0420 00:00:39.110005 8 log.go:172] (0xc002aeeb00) (0xc0013ec280) Stream added, broadcasting: 1
I0420 00:00:39.116786 8 log.go:172] (0xc002aeeb00) Reply frame received for 1
I0420 00:00:39.116830 8 log.go:172] (0xc002aeeb00) (0xc001aa2000) Create stream
I0420 00:00:39.116844 8 log.go:172] (0xc002aeeb00) (0xc001aa2000) Stream added, broadcasting: 3
I0420 00:00:39.117964 8 log.go:172] (0xc002aeeb00) Reply frame received for 3
I0420 00:00:39.117990 8 log.go:172] (0xc002aeeb00) (0xc001a6c000) Create stream
I0420 00:00:39.118002 8 log.go:172] (0xc002aeeb00) (0xc001a6c000) Stream added, broadcasting: 5
I0420 00:00:39.118803 8 log.go:172] (0xc002aeeb00) Reply frame received for 5
I0420 00:00:39.174432 8 log.go:172] (0xc002aeeb00) Data frame received for 5
I0420 00:00:39.174489 8 log.go:172] (0xc001a6c000) (5) Data frame handling
I0420 00:00:39.174546 8 log.go:172] (0xc002aeeb00) Data frame received for 3
I0420 00:00:39.174599 8 log.go:172] (0xc001aa2000) (3) Data frame handling
I0420 00:00:39.174690 8 log.go:172] (0xc001aa2000) (3) Data frame sent
I0420 00:00:39.174761 8 log.go:172] (0xc002aeeb00) Data frame received for 3
I0420 00:00:39.174791 8 log.go:172] (0xc001aa2000) (3) Data frame handling
I0420 00:00:39.175844 8 log.go:172] (0xc002aeeb00) Data frame received for 1
I0420 00:00:39.175857 8 log.go:172] (0xc0013ec280) (1) Data frame handling
I0420 00:00:39.175877 8 log.go:172] (0xc0013ec280) (1) Data frame sent
I0420 00:00:39.175893 8 log.go:172] (0xc002aeeb00) (0xc0013ec280) Stream removed, broadcasting: 1
I0420 00:00:39.175951 8 log.go:172] (0xc002aeeb00) (0xc0013ec280) Stream removed, broadcasting: 1
I0420 00:00:39.175963 8 log.go:172] (0xc002aeeb00) (0xc001aa2000) Stream removed, broadcasting: 3
I0420 00:00:39.176087 8 log.go:172] (0xc002aeeb00) (0xc001a6c000) Stream removed, broadcasting: 5
I0420 00:00:39.176116 8 log.go:172] (0xc002aeeb00) Go away received
Apr 20 00:00:39.176: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:00:39.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6038" for this suite.
• [SLOW TEST:28.422 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:00:39.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:00:39.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4" in namespace "downward-api-2945" to be "Succeeded or Failed" Apr 20 00:00:39.288: INFO: Pod "downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4": 
Phase="Pending", Reason="", readiness=false. Elapsed: 27.829591ms
Apr 20 00:00:41.750: INFO: Pod "downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489412192s
Apr 20 00:00:43.754: INFO: Pod "downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.493606951s
STEP: Saw pod success
Apr 20 00:00:43.754: INFO: Pod "downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4" satisfied condition "Succeeded or Failed"
Apr 20 00:00:43.758: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4 container client-container:
STEP: delete the pod
Apr 20 00:00:43.816: INFO: Waiting for pod downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4 to disappear
Apr 20 00:00:43.819: INFO: Pod downwardapi-volume-38a80393-2097-4886-9d92-6372d8448bf4 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:00:43.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2945" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1203,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:00:43.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 20 00:00:43.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2518'
Apr 20 00:00:44.138: INFO: stderr: ""
Apr 20 00:00:44.138: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Apr 20 00:00:49.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2518 -o json'
Apr 20 00:00:49.297: INFO: stderr: ""
Apr 20 00:00:49.297: INFO:
stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-20T00:00:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2518\",\n \"resourceVersion\": \"9457445\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2518/pods/e2e-test-httpd-pod\",\n \"uid\": \"bfa1760b-a183-4556-bcb7-ebd37f1f3731\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-c49bk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-c49bk\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-c49bk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-20T00:00:44Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-20T00:00:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n 
{\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-20T00:00:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-20T00:00:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3c762df8150679f657d2a9fdeb792af900ccd5da6ff3737bf935d6f07b14c53c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-20T00:00:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.159\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.159\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-20T00:00:44Z\"\n }\n}\n"
STEP: replace the image in the pod
Apr 20 00:00:49.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2518'
Apr 20 00:00:49.625: INFO: stderr: ""
Apr 20 00:00:49.625: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Apr 20 00:00:49.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2518'
Apr 20 00:00:52.820: INFO: stderr: ""
Apr 20 00:00:52.821: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20
00:00:52.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2518" for this suite.
• [SLOW TEST:9.073 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":70,"skipped":1221,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:00:52.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 20 00:00:53.735: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 20 00:00:55.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1,
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937653, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722937653, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 20 00:00:58.936: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:00:59.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5489" for this suite.
STEP: Destroying namespace "webhook-5489-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.247 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":71,"skipped":1222,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:00:59.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-a066660d-2b7d-4c58-9f46-97dab3fe737a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a066660d-2b7d-4c58-9f46-97dab3fe737a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20
00:01:05.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9970" for this suite.
• [SLOW TEST:6.156 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:01:05.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 20 00:01:05.446: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 20 00:01:14.532: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying
pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:01:14.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3153" for this suite.
• [SLOW TEST:9.239 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1298,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:01:14.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 20 00:01:14.623: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:01:15.213: INFO: Waiting up to
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3076" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":74,"skipped":1298,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:01:15.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-3763
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3763 to expose endpoints map[]
Apr 20 00:01:15.372: INFO: Get endpoints failed (17.848473ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 20 00:01:16.376: INFO: successfully validated that service endpoint-test2 in namespace services-3763 exposes endpoints map[] (1.021560717s elapsed)
STEP: Creating pod pod1 in namespace services-3763
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3763 to expose endpoints map[pod1:[80]]
Apr 20 00:01:20.738: INFO: successfully validated that service endpoint-test2 in namespace services-3763 exposes endpoints
map[pod1:[80]] (4.354118851s elapsed)
STEP: Creating pod pod2 in namespace services-3763
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3763 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 20 00:01:23.839: INFO: successfully validated that service endpoint-test2 in namespace services-3763 exposes endpoints map[pod1:[80] pod2:[80]] (3.096979803s elapsed)
STEP: Deleting pod pod1 in namespace services-3763
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3763 to expose endpoints map[pod2:[80]]
Apr 20 00:01:24.927: INFO: successfully validated that service endpoint-test2 in namespace services-3763 exposes endpoints map[pod2:[80]] (1.082932427s elapsed)
STEP: Deleting pod pod2 in namespace services-3763
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3763 to expose endpoints map[]
Apr 20 00:01:25.957: INFO: successfully validated that service endpoint-test2 in namespace services-3763 exposes endpoints map[] (1.021057884s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:01:25.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3763" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:10.761 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":75,"skipped":1311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:01:25.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-c74a0de2-9b96-44ed-82ab-456d41835549
STEP: Creating a pod to test consume configMaps
Apr 20 00:01:26.076: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6" in namespace "projected-6764" to be "Succeeded or Failed"
Apr 20 00:01:26.096: INFO: Pod "pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6": Phase="Pending", Reason="", readiness=false.
Elapsed: 20.18275ms
Apr 20 00:01:28.101: INFO: Pod "pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024686714s
Apr 20 00:01:30.105: INFO: Pod "pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029268555s
STEP: Saw pod success
Apr 20 00:01:30.105: INFO: Pod "pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6" satisfied condition "Succeeded or Failed"
Apr 20 00:01:30.108: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6 container projected-configmap-volume-test:
STEP: delete the pod
Apr 20 00:01:30.126: INFO: Waiting for pod pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6 to disappear
Apr 20 00:01:30.130: INFO: Pod pod-projected-configmaps-6ea45e28-160b-450a-9641-c666486347b6 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:01:30.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6764" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1358,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:01:30.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-4654
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4654
STEP: Deleting pre-stop pod
Apr 20 00:01:43.275: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:01:43.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4654" for this suite.
• [SLOW TEST:13.197 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":77,"skipped":1360,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:01:43.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 20 00:01:43.383: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 20 00:01:43.409: INFO: Waiting for terminating namespaces to be deleted...
Apr 20 00:01:43.412: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 20 00:01:43.429: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:01:43.429: INFO: Container kindnet-cni ready: true, restart count 0
Apr 20 00:01:43.429: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:01:43.429: INFO: Container kube-proxy ready: true, restart count 0
Apr 20 00:01:43.429: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 20 00:01:43.652: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:01:43.652: INFO: Container kube-proxy ready: true, restart count 0
Apr 20 00:01:43.652: INFO: server from prestop-4654 started at 2020-04-20 00:01:30 +0000 UTC (1 container statuses recorded)
Apr 20 00:01:43.652: INFO: Container server ready: true, restart count 0
Apr 20 00:01:43.652: INFO: tester from prestop-4654 started at 2020-04-20 00:01:34 +0000 UTC (1 container statuses recorded)
Apr 20 00:01:43.652: INFO: Container tester ready: true, restart count 0
Apr 20 00:01:43.652: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:01:43.652: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8b3066aa-8261-48ac-836c-ea712b5ac787 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-8b3066aa-8261-48ac-836c-ea712b5ac787 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8b3066aa-8261-48ac-836c-ea712b5ac787
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:06:51.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2093" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:308.541 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":78,"skipped":1365,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:06:51.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:06:55.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1553" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1411,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:06:55.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:07:12.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2921" for this suite. • [SLOW TEST:16.267 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":80,"skipped":1415,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:07:12.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-svwq STEP: Creating a pod to test atomic-volume-subpath Apr 20 00:07:12.366: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-svwq" in namespace "subpath-7585" to be "Succeeded or Failed" Apr 20 00:07:12.371: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348123ms Apr 20 00:07:14.374: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008043414s Apr 20 00:07:16.396: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 4.029740589s Apr 20 00:07:18.399: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.033046798s Apr 20 00:07:20.402: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 8.035944181s Apr 20 00:07:22.407: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 10.040739477s Apr 20 00:07:24.411: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 12.044799065s Apr 20 00:07:26.420: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 14.053497383s Apr 20 00:07:28.424: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 16.057845578s Apr 20 00:07:30.427: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 18.061262627s Apr 20 00:07:32.432: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 20.065456685s Apr 20 00:07:34.437: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Running", Reason="", readiness=true. Elapsed: 22.070313315s Apr 20 00:07:36.444: INFO: Pod "pod-subpath-test-secret-svwq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.078170669s STEP: Saw pod success Apr 20 00:07:36.444: INFO: Pod "pod-subpath-test-secret-svwq" satisfied condition "Succeeded or Failed" Apr 20 00:07:36.448: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-svwq container test-container-subpath-secret-svwq: STEP: delete the pod Apr 20 00:07:36.495: INFO: Waiting for pod pod-subpath-test-secret-svwq to disappear Apr 20 00:07:36.498: INFO: Pod pod-subpath-test-secret-svwq no longer exists STEP: Deleting pod pod-subpath-test-secret-svwq Apr 20 00:07:36.498: INFO: Deleting pod "pod-subpath-test-secret-svwq" in namespace "subpath-7585" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:07:36.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7585" for this suite. • [SLOW TEST:24.263 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":81,"skipped":1424,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 20 00:07:36.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:07:37.181: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:07:39.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938057, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938057, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938057, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938057, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:07:42.218: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:07:42.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-308" for this suite. STEP: Destroying namespace "webhook-308-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.979 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":82,"skipped":1427,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:07:42.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6622 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6622 STEP: Creating statefulset with conflicting port in namespace statefulset-6622 STEP: Waiting until pod test-pod will start running in namespace statefulset-6622 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6622 Apr 20 00:07:46.587: INFO: Observed stateful pod in namespace: statefulset-6622, name: ss-0, uid: c56d17d0-f26a-4c50-84da-6b8cb97678b3, status phase: Pending. Waiting for statefulset controller to delete. Apr 20 00:07:52.724: INFO: Observed stateful pod in namespace: statefulset-6622, name: ss-0, uid: c56d17d0-f26a-4c50-84da-6b8cb97678b3, status phase: Failed. Waiting for statefulset controller to delete. Apr 20 00:07:52.732: INFO: Observed stateful pod in namespace: statefulset-6622, name: ss-0, uid: c56d17d0-f26a-4c50-84da-6b8cb97678b3, status phase: Failed. Waiting for statefulset controller to delete. Apr 20 00:07:52.743: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6622 STEP: Removing pod with conflicting port in namespace statefulset-6622 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6622 and will be in running state Apr 20 00:12:52.848: FAIL: Timed out after 300.000s. 
Expected
    <*errors.errorString | 0xc002b54270>: {
        s: "pod ss-0 is not in running phase: Pending",
    }
to be nil
Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782 +0x11df
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000b9fc00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc000b9fc00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc000b9fc00, 0x4ae7658)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 20 00:12:52.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-6622'
Apr 20 00:12:55.263: INFO: stderr: ""
Apr 20 00:12:55.263: INFO: stdout: "Name: ss-0\nNamespace: statefulset-6622\nPriority: 0\nNode: latest-worker/\nLabels: baz=blah\n controller-revision-hash=ss-84f8fd7c56\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nIPs: \nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Image: docker.io/library/httpd:2.4.38-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-h4jkc (ro)\nVolumes:\n default-token-h4jkc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-h4jkc\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m1s kubelet, latest-worker Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\n Normal Created 5m1s kubelet, latest-worker Created container webserver\n Normal Started 5m kubelet, latest-worker Started container webserver\n"
Apr 20 00:12:55.263: INFO: Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-6622
Priority:       0
Node:           latest-worker/
Labels:         baz=blah
                controller-revision-hash=ss-84f8fd7c56
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
IPs:            
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Image:        docker.io/library/httpd:2.4.38-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h4jkc (ro)
Volumes:
  default-token-h4jkc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h4jkc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                    Message
  ----    ------   ----  ----                    -------
  Normal  Pulled   5m1s  kubelet, latest-worker  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
  Normal  Created  5m1s  kubelet, latest-worker  Created container webserver
  Normal  Started  5m    kubelet, latest-worker  Started container webserver
Apr 20 00:12:55.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-6622 --tail=100'
Apr 20 00:12:55.374: INFO: stderr: ""
Apr 20 00:12:55.374: INFO: stdout: "[Mon Apr 20 00:07:55.248323 2020] [mpm_event:notice] [pid 1:tid 140485080353640] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Apr 20 00:07:55.248389 2020] [core:notice] [pid 1:tid 140485080353640] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Apr 20 00:12:55.374: INFO: Last 100 log lines of ss-0:
[Mon Apr 20 00:07:55.248323 2020] [mpm_event:notice] [pid 1:tid 140485080353640]
AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Mon Apr 20 00:07:55.248389 2020] [core:notice] [pid 1:tid 140485080353640] AH00094: Command line: 'httpd -D FOREGROUND' Apr 20 00:12:55.374: INFO: Deleting all statefulset in ns statefulset-6622 Apr 20 00:12:55.383: INFO: Scaling statefulset ss to 0 Apr 20 00:13:05.399: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 00:13:05.408: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 STEP: Collecting events from namespace "statefulset-6622". STEP: Found 14 events. Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:42 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-6622/ss is recreating failed Pod ss-0 Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:42 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:42 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:43 +0000 UTC - event for test-pod: {kubelet latest-worker} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:44 +0000 UTC - event for test-pod: {kubelet latest-worker} Created: Created container webserver Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:44 +0000 UTC - event for test-pod: {kubelet latest-worker} Started: Started container webserver Apr 20 00:13:05.424: 
INFO: At 2020-04-20 00:07:52 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:52 +0000 UTC - event for test-pod: {kubelet latest-worker} Killing: Stopping container webserver Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:54 +0000 UTC - event for ss-0: {kubelet latest-worker} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:54 +0000 UTC - event for ss-0: {kubelet latest-worker} Created: Created container webserver Apr 20 00:13:05.424: INFO: At 2020-04-20 00:07:55 +0000 UTC - event for ss-0: {kubelet latest-worker} Started: Started container webserver Apr 20 00:13:05.424: INFO: At 2020-04-20 00:12:55 +0000 UTC - event for ss-0: {kubelet latest-worker} Killing: Stopping container webserver Apr 20 00:13:05.426: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:13:05.426: INFO: Apr 20 00:13:05.428: INFO: Logging node info for node latest-control-plane Apr 20 00:13:05.430: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane 6f844e63-ec06-4ae6-b2e5-2db982693de5 9459682 0 2020-03-15 18:27:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-04-20 00:10:25 +0000 UTC,LastTransitionTime:2020-03-15 18:27:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-04-20 00:10:25 +0000 UTC,LastTransitionTime:2020-03-15 18:27:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-04-20 00:10:25 +0000 UTC,LastTransitionTime:2020-03-15 18:27:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-04-20 00:10:25 +0000 UTC,LastTransitionTime:2020-03-15 18:28:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.11,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fd1b5d260b433d8f617f455164eb5a,SystemUUID:611bedf3-8581-4e6e-a43b-01a437bb59ad,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 20 00:13:05.431: INFO: Logging kubelet events for node latest-control-plane Apr 20 00:13:05.432: INFO: Logging pods the kubelet thinks is on node latest-control-plane Apr 20 00:13:05.450: INFO: kindnet-sx5s7 started at 2020-03-15 18:27:50 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.450: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 00:13:05.450: INFO: local-path-provisioner-7745554f7f-fmsmz started at 2020-03-15 18:28:06 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container local-path-provisioner ready: true, restart count 0 Apr 20 00:13:05.451: INFO: coredns-6955765f44-lq4t7 started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container coredns ready: true, restart count 0 Apr 20 00:13:05.451: INFO: coredns-6955765f44-f7wtl started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container coredns ready: true, restart count 0 
Apr 20 00:13:05.451: INFO: etcd-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container etcd ready: true, restart count 0 Apr 20 00:13:05.451: INFO: kube-proxy-jpqvf started at 2020-03-15 18:27:50 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container kube-proxy ready: true, restart count 0 Apr 20 00:13:05.451: INFO: kube-scheduler-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container kube-scheduler ready: true, restart count 1 Apr 20 00:13:05.451: INFO: kube-apiserver-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container kube-apiserver ready: true, restart count 0 Apr 20 00:13:05.451: INFO: kube-controller-manager-latest-control-plane started at 2020-03-15 18:27:36 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.451: INFO: Container kube-controller-manager ready: true, restart count 1 W0420 00:13:05.454086 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 20 00:13:05.538: INFO: Latency metrics for node latest-control-plane Apr 20 00:13:05.538: INFO: Logging node info for node latest-worker Apr 20 00:13:05.541: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 98bcda58-a897-4edf-8857-b99f8c93a9dc 9460108 0 2020-03-15 18:28:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-04-20 00:12:52 +0000 UTC,LastTransitionTime:2020-03-15 18:28:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-04-20 00:12:52 +0000 UTC,LastTransitionTime:2020-03-15 18:28:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-04-20 00:12:52 +0000 UTC,LastTransitionTime:2020-03-15 18:28:07 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-04-20 00:12:52 +0000 UTC,LastTransitionTime:2020-03-15 18:28:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.13,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ded315e8ce8e461b8f5fb393e0d16a78,SystemUUID:e785bdde-e4ba-4979-bd97-238cd0b6bc89,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 
docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:950d6d7ef36bc5fb621d32ec22f46e2406cedc6ea6bb4b0f681d94991fae94f9 docker.io/aquasec/kube-hunter:latest],SizeBytes:124685171,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:769eeab9e89aa7dd37dadc70cf3000be3b056a7474c422f65973944657753ac3],SizeBytes:124465673,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:717e3f51673ced61d9bf2e60081d914c74b02909cfb49f01fed4e2abe5b2b0cc],SizeBytes:124450316,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:e10ee22e7b56d08b7cb7da2a390863c445d66a7284294cee8c9decbfb3ba4359 
k8s.gcr.io/etcd:3.4.4],SizeBytes:118972812,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9 
docker.io/aquasec/kube-bench:latest],SizeBytes:8029111,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9],SizeBytes:8028777,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12 docker.io/library/busybox@sha256:89b54451a47954c0422d873d438509dae87d478f1cb5d67fb130072f67ca5d25 docker.io/library/busybox:latest],SizeBytes:764739,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 20 00:13:05.542: INFO: Logging kubelet events for node latest-worker Apr 20 00:13:05.544: INFO: Logging pods the kubelet thinks is on node latest-worker Apr 20 00:13:05.549: INFO: kindnet-vnjgh started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.549: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 00:13:05.549: INFO: kube-proxy-s9v6p started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.549: INFO: Container kube-proxy ready: true, restart count 0 W0420 00:13:05.553044 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 20 00:13:05.603: INFO: Latency metrics for node latest-worker Apr 20 00:13:05.603: INFO: Logging node info for node latest-worker2 Apr 20 00:13:05.606: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 9565903b-7ffe-4e7a-aa51-04476604a6d3 9459881 0 2020-03-15 18:28:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-04-20 00:11:33 +0000 UTC,LastTransitionTime:2020-03-15 18:28:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-04-20 00:11:33 +0000 UTC,LastTransitionTime:2020-03-15 18:28:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-04-20 00:11:33 +0000 UTC,LastTransitionTime:2020-03-15 18:28:06 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-04-20 00:11:33 +0000 UTC,LastTransitionTime:2020-03-15 18:28:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.12,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ebeddb5d9794194b18fe17773f1735f,SystemUUID:bf79d085-e343-4740-b85c-023bec44e003,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 
docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:950d6d7ef36bc5fb621d32ec22f46e2406cedc6ea6bb4b0f681d94991fae94f9 
docker.io/aquasec/kube-hunter:latest],SizeBytes:124685171,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:769eeab9e89aa7dd37dadc70cf3000be3b056a7474c422f65973944657753ac3],SizeBytes:124465673,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:717e3f51673ced61d9bf2e60081d914c74b02909cfb49f01fed4e2abe5b2b0cc],SizeBytes:124450316,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:563f44851d413c7199a0a8a2a13df1e98bee48229e19f4917e6da68e5482df6e],SizeBytes:123995068,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:634b39c4e07b257dc26c5b96ccf4abb8eb2f558a6fa375e5236e5facc1c6acab],SizeBytes:123703809,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9 
docker.io/aquasec/kube-bench:latest],SizeBytes:8029111,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9],SizeBytes:8028777,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12 docker.io/library/busybox@sha256:89b54451a47954c0422d873d438509dae87d478f1cb5d67fb130072f67ca5d25 docker.io/library/busybox:latest],SizeBytes:764739,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 20 00:13:05.607: INFO: Logging kubelet events for node latest-worker2 Apr 20 00:13:05.609: INFO: Logging pods the kubelet thinks is on node latest-worker2 Apr 20 00:13:05.629: INFO: kindnet-zq6gp started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.629: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 00:13:05.629: INFO: kube-proxy-c5xlk started at 2020-03-15 18:28:07 +0000 UTC (0+1 container statuses recorded) Apr 20 00:13:05.629: INFO: Container kube-proxy ready: true, restart count 0 W0420 00:13:05.632954 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 20 00:13:05.676: INFO: Latency metrics for node latest-worker2 Apr 20 00:13:05.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6622" for this suite. 
• Failure [323.200 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:12:52.848: Timed out after 300.000s. Expected <*errors.errorString | 0xc002b54270>: { s: "pod ss-0 is not in running phase: Pending", } to be nil /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":82,"skipped":1435,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:05.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:13:05.762: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:12.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9381" for this suite. • [SLOW TEST:6.369 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":83,"skipped":1494,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:12.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets 
STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e9c24cf8-8ff9-4a82-b679-0e12a9aaebd1 STEP: Creating a pod to test consume secrets Apr 20 00:13:12.147: INFO: Waiting up to 5m0s for pod "pod-secrets-e096776a-a523-4237-baa6-38136441cb35" in namespace "secrets-812" to be "Succeeded or Failed" Apr 20 00:13:12.155: INFO: Pod "pod-secrets-e096776a-a523-4237-baa6-38136441cb35": Phase="Pending", Reason="", readiness=false. Elapsed: 7.362106ms Apr 20 00:13:14.158: INFO: Pod "pod-secrets-e096776a-a523-4237-baa6-38136441cb35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01080388s Apr 20 00:13:16.402: INFO: Pod "pod-secrets-e096776a-a523-4237-baa6-38136441cb35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.25482695s STEP: Saw pod success Apr 20 00:13:16.402: INFO: Pod "pod-secrets-e096776a-a523-4237-baa6-38136441cb35" satisfied condition "Succeeded or Failed" Apr 20 00:13:16.406: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e096776a-a523-4237-baa6-38136441cb35 container secret-env-test: STEP: delete the pod Apr 20 00:13:17.346: INFO: Waiting for pod pod-secrets-e096776a-a523-4237-baa6-38136441cb35 to disappear Apr 20 00:13:17.438: INFO: Pod pod-secrets-e096776a-a523-4237-baa6-38136441cb35 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:17.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-812" for this suite. 
• [SLOW TEST:5.451 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1514,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:17.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-88cc0a0c-86ef-4700-8bc0-fd0938053c33 STEP: Creating a pod to test consume configMaps Apr 20 00:13:17.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f" in namespace "configmap-3568" to be "Succeeded or Failed" Apr 20 00:13:17.659: INFO: Pod "pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.255005ms Apr 20 00:13:19.690: INFO: Pod "pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058635306s Apr 20 00:13:21.701: INFO: Pod "pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06902237s STEP: Saw pod success Apr 20 00:13:21.701: INFO: Pod "pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f" satisfied condition "Succeeded or Failed" Apr 20 00:13:21.714: INFO: Trying to get logs from node latest-worker pod pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f container configmap-volume-test: STEP: delete the pod Apr 20 00:13:21.751: INFO: Waiting for pod pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f to disappear Apr 20 00:13:21.755: INFO: Pod pod-configmaps-dd31a262-28ff-4429-8143-eb392041be7f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:21.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3568" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1515,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:21.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-d0c96d9e-50fe-4b20-8fb0-7fee53f07bd6 STEP: Creating a pod to test consume secrets Apr 20 00:13:21.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272" in namespace "projected-4277" to be "Succeeded or Failed" Apr 20 00:13:21.863: INFO: Pod "pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272": Phase="Pending", Reason="", readiness=false. Elapsed: 20.912942ms Apr 20 00:13:23.924: INFO: Pod "pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082145915s Apr 20 00:13:25.928: INFO: Pod "pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086311655s STEP: Saw pod success Apr 20 00:13:25.928: INFO: Pod "pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272" satisfied condition "Succeeded or Failed" Apr 20 00:13:25.931: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272 container projected-secret-volume-test: STEP: delete the pod Apr 20 00:13:25.970: INFO: Waiting for pod pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272 to disappear Apr 20 00:13:25.975: INFO: Pod pod-projected-secrets-b361b3e5-59aa-49d0-a8f1-9a9e8b2e4272 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:25.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4277" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1578,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:25.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-876031ec-d7b4-4957-bbc4-86c7fe3152d1 STEP: Creating a pod to test consume secrets Apr 20 00:13:26.111: INFO: Waiting up to 5m0s for pod "pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd" in namespace "secrets-3276" to be "Succeeded or Failed" Apr 20 00:13:26.119: INFO: Pod "pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.700496ms Apr 20 00:13:28.168: INFO: Pod "pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056888421s Apr 20 00:13:30.180: INFO: Pod "pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069080163s STEP: Saw pod success Apr 20 00:13:30.180: INFO: Pod "pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd" satisfied condition "Succeeded or Failed" Apr 20 00:13:30.183: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd container secret-volume-test: STEP: delete the pod Apr 20 00:13:30.198: INFO: Waiting for pod pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd to disappear Apr 20 00:13:30.203: INFO: Pod pod-secrets-2599ef14-5b88-48fb-974b-5dbc327b16fd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:30.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3276" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1600,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:30.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:30.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5209" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1603,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:30.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8875.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8875.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 20 00:13:36.498: INFO: DNS probes using dns-8875/dns-test-57f48b51-354e-42c7-858d-f87d1dd7254f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:36.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8875" for this suite. 
• [SLOW TEST:6.207 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":89,"skipped":1620,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:36.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:13:36.640: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b6a17f6b-f77f-4ecb-a165-620f378d6eca" in namespace "security-context-test-3882" to be "Succeeded or Failed" Apr 20 00:13:36.661: INFO: Pod "alpine-nnp-false-b6a17f6b-f77f-4ecb-a165-620f378d6eca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.521601ms Apr 20 00:13:38.713: INFO: Pod "alpine-nnp-false-b6a17f6b-f77f-4ecb-a165-620f378d6eca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073104868s Apr 20 00:13:40.718: INFO: Pod "alpine-nnp-false-b6a17f6b-f77f-4ecb-a165-620f378d6eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077517341s Apr 20 00:13:40.718: INFO: Pod "alpine-nnp-false-b6a17f6b-f77f-4ecb-a165-620f378d6eca" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:40.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3882" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1638,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:40.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: 
Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:13:41.494: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:13:43.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938421, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938421, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938421, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938421, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:13:46.546: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 20 00:13:51.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-508 to-be-attached-pod -i -c=container1' Apr 20 00:13:51.466: INFO: rc: 1 [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:13:51.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-508" for this suite. STEP: Destroying namespace "webhook-508-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.815 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":91,"skipped":1669,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:13:51.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 20 00:13:59.612: INFO: 0 pods remaining Apr 20 00:13:59.612: INFO: 0 pods have nil DeletionTimestamp Apr 20 00:13:59.612: INFO: Apr 20 00:14:00.846: INFO: 0 pods remaining Apr 20 00:14:00.846: INFO: 0 pods have nil DeletionTimestamp Apr 20 00:14:00.846: INFO: STEP: Gathering metrics W0420 00:14:01.623375 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 20 00:14:01.623: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:01.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8259" for this suite. 
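Editor's note: "keep the rc around until all its pods are deleted" corresponds to foreground cascading deletion. A sketch of the DeleteOptions body that produces this behavior (the rc name used for the kubectl equivalent is hypothetical):

```yaml
# With propagationPolicy: Foreground, the ReplicationController gets a
# deletionTimestamp and a foregroundDeletion finalizer, and is only
# removed from the API after all of its dependent pods are gone.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
# Roughly equivalent on recent kubectl versions:
#   kubectl delete rc my-rc --cascade=foreground
```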
• [SLOW TEST:10.081 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":92,"skipped":1687,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:01.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:14:01.882: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:06.149: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "pods-9862" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1716,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:06.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 20 00:14:06.213: INFO: Waiting up to 5m0s for pod "pod-cf3c4953-91aa-44cf-b882-2858e9752f41" in namespace "emptydir-1736" to be "Succeeded or Failed" Apr 20 00:14:06.224: INFO: Pod "pod-cf3c4953-91aa-44cf-b882-2858e9752f41": Phase="Pending", Reason="", readiness=false. Elapsed: 11.003529ms Apr 20 00:14:08.300: INFO: Pod "pod-cf3c4953-91aa-44cf-b882-2858e9752f41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086896872s Apr 20 00:14:10.304: INFO: Pod "pod-cf3c4953-91aa-44cf-b882-2858e9752f41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09066398s Apr 20 00:14:12.308: INFO: Pod "pod-cf3c4953-91aa-44cf-b882-2858e9752f41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094214925s STEP: Saw pod success Apr 20 00:14:12.308: INFO: Pod "pod-cf3c4953-91aa-44cf-b882-2858e9752f41" satisfied condition "Succeeded or Failed" Apr 20 00:14:12.310: INFO: Trying to get logs from node latest-worker pod pod-cf3c4953-91aa-44cf-b882-2858e9752f41 container test-container: STEP: delete the pod Apr 20 00:14:12.343: INFO: Waiting for pod pod-cf3c4953-91aa-44cf-b882-2858e9752f41 to disappear Apr 20 00:14:12.348: INFO: Pod pod-cf3c4953-91aa-44cf-b882-2858e9752f41 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:12.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1736" for this suite. • [SLOW TEST:6.198 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1743,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:12.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 20 00:14:16.955: INFO: Successfully updated pod "pod-update-80ba4aba-23f9-419f-b7a0-767d98f59b06" STEP: verifying the updated pod is in kubernetes Apr 20 00:14:16.967: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:16.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7835" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1748,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:16.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-d994d595-9e4e-4646-9eec-06750ba3d421 STEP: Creating a pod to test consume secrets Apr 20 00:14:17.045: INFO: Waiting up to 5m0s for pod "pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9" in namespace "secrets-3329" to be "Succeeded or Failed" Apr 20 00:14:17.059: INFO: Pod "pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.687868ms Apr 20 00:14:19.062: INFO: Pod "pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017126196s Apr 20 00:14:21.067: INFO: Pod "pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021816521s STEP: Saw pod success Apr 20 00:14:21.067: INFO: Pod "pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9" satisfied condition "Succeeded or Failed" Apr 20 00:14:21.070: INFO: Trying to get logs from node latest-worker pod pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9 container secret-volume-test: STEP: delete the pod Apr 20 00:14:21.114: INFO: Waiting for pod pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9 to disappear Apr 20 00:14:21.127: INFO: Pod pod-secrets-c490acba-d6cd-4b40-b62f-f139d588b4a9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:21.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3329" for this suite. 
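Editor's note: "mappings and Item Mode set" refers to remapping a secret key to a custom path and giving that file an explicit mode. A minimal illustrative pod (all names and the key/path/mode values here are made up, not the generated ones in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1   # mapping: key is projected to this path
        mode: 0400              # item mode: per-file permissions
  restartPolicy: Never
```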
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1774,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:21.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 20 00:14:21.207: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:36.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2252" for this suite. 
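Editor's note: the "multi version CRD" in this test publishes each version's schema to /openapi/v2 only while that version's `served` flag is true; flipping it to false is what "mark a version not served" does. A sketch with an illustrative group and kind (not the names the test generates):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names:
    plural: foos
    singular: foo
    kind: Foo
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false   # once false, v2's definitions are removed from the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```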
• [SLOW TEST:15.495 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":97,"skipped":1826,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:36.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 20 00:14:36.715: INFO: Waiting up to 5m0s for pod "pod-b23f86d9-fca3-425e-92ef-af72969d6c76" in namespace "emptydir-2356" to be "Succeeded or Failed" Apr 20 00:14:36.772: INFO: Pod "pod-b23f86d9-fca3-425e-92ef-af72969d6c76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.880155ms Apr 20 00:14:38.827: INFO: Pod "pod-b23f86d9-fca3-425e-92ef-af72969d6c76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111982808s Apr 20 00:14:40.831: INFO: Pod "pod-b23f86d9-fca3-425e-92ef-af72969d6c76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115468418s STEP: Saw pod success Apr 20 00:14:40.831: INFO: Pod "pod-b23f86d9-fca3-425e-92ef-af72969d6c76" satisfied condition "Succeeded or Failed" Apr 20 00:14:40.870: INFO: Trying to get logs from node latest-worker2 pod pod-b23f86d9-fca3-425e-92ef-af72969d6c76 container test-container: STEP: delete the pod Apr 20 00:14:40.892: INFO: Waiting for pod pod-b23f86d9-fca3-425e-92ef-af72969d6c76 to disappear Apr 20 00:14:40.895: INFO: Pod pod-b23f86d9-fca3-425e-92ef-af72969d6c76 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:40.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2356" for this suite. 
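Editor's note: the "(root,0777,tmpfs)" case mounts an emptyDir backed by memory (`medium: Memory`, i.e. tmpfs) and checks 0777 permissions on the mount point. An illustrative pod (pod and container names are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expect 777
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed; omit for the "default medium" variant
  restartPolicy: Never
```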
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1884,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:40.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 20 00:14:41.118: INFO: Waiting up to 5m0s for pod "client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752" in namespace "containers-5791" to be "Succeeded or Failed" Apr 20 00:14:41.139: INFO: Pod "client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752": Phase="Pending", Reason="", readiness=false. Elapsed: 21.755214ms Apr 20 00:14:43.143: INFO: Pod "client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025582441s Apr 20 00:14:45.147: INFO: Pod "client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029893826s STEP: Saw pod success Apr 20 00:14:45.148: INFO: Pod "client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752" satisfied condition "Succeeded or Failed" Apr 20 00:14:45.151: INFO: Trying to get logs from node latest-worker2 pod client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752 container test-container: STEP: delete the pod Apr 20 00:14:45.171: INFO: Waiting for pod client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752 to disappear Apr 20 00:14:45.175: INFO: Pod client-containers-5cb9286b-275b-4412-a73c-c1b58cf70752 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:45.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5791" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1884,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:45.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod 
with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:50.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5613" for this suite. • [SLOW TEST:5.150 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":100,"skipped":1912,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:50.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new 
configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 20 00:14:50.420: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5534 /api/v1/namespaces/watch-5534/configmaps/e2e-watch-test-watch-closed 3ef6723c-2e40-4b44-8d21-4a7e4b3a5cdb 9461160 0 2020-04-20 00:14:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:14:50.420: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5534 /api/v1/namespaces/watch-5534/configmaps/e2e-watch-test-watch-closed 3ef6723c-2e40-4b44-8d21-4a7e4b3a5cdb 9461161 0 2020-04-20 00:14:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 20 00:14:50.460: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5534 /api/v1/namespaces/watch-5534/configmaps/e2e-watch-test-watch-closed 3ef6723c-2e40-4b44-8d21-4a7e4b3a5cdb 9461163 0 2020-04-20 00:14:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:14:50.460: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5534 /api/v1/namespaces/watch-5534/configmaps/e2e-watch-test-watch-closed 3ef6723c-2e40-4b44-8d21-4a7e4b3a5cdb 9461165 0 2020-04-20 00:14:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:50.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5534" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":101,"skipped":1914,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:50.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-3312/secret-test-9dce8bdb-5e88-43d4-93c1-ff88564369b0 STEP: Creating a pod to test consume secrets Apr 20 00:14:50.639: INFO: Waiting up to 5m0s for pod "pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8" in namespace "secrets-3312" to be "Succeeded or Failed" Apr 20 00:14:50.645: INFO: Pod "pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.248019ms Apr 20 00:14:52.649: INFO: Pod "pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010422345s Apr 20 00:14:54.653: INFO: Pod "pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014573273s STEP: Saw pod success Apr 20 00:14:54.653: INFO: Pod "pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8" satisfied condition "Succeeded or Failed" Apr 20 00:14:54.656: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8 container env-test: STEP: delete the pod Apr 20 00:14:54.689: INFO: Waiting for pod pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8 to disappear Apr 20 00:14:54.720: INFO: Pod pod-configmaps-d07455d2-2f32-4a57-8513-431cb19635e8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:54.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3312" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1962,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:54.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-e33e3430-9357-4d16-897e-4a52810c50eb STEP: Creating a pod to test consume secrets Apr 20 00:14:54.798: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106" in namespace "projected-750" to be "Succeeded or Failed" Apr 20 00:14:54.801: INFO: Pod "pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106": Phase="Pending", Reason="", readiness=false. Elapsed: 3.01801ms Apr 20 00:14:56.811: INFO: Pod "pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013127825s Apr 20 00:14:58.815: INFO: Pod "pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017525954s STEP: Saw pod success Apr 20 00:14:58.815: INFO: Pod "pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106" satisfied condition "Succeeded or Failed" Apr 20 00:14:58.819: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106 container projected-secret-volume-test: STEP: delete the pod Apr 20 00:14:59.025: INFO: Waiting for pod pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106 to disappear Apr 20 00:14:59.049: INFO: Pod pod-projected-secrets-99ae27e8-94f1-4673-b499-47f98be59106 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:14:59.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-750" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1999,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:14:59.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 20 00:14:59.130: INFO: Waiting up to 5m0s for pod "downward-api-2eceb951-a645-4e61-b223-a912badfa3a5" in namespace "downward-api-2896" to be "Succeeded or Failed" Apr 20 00:14:59.134: INFO: Pod "downward-api-2eceb951-a645-4e61-b223-a912badfa3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.40743ms Apr 20 00:15:01.137: INFO: Pod "downward-api-2eceb951-a645-4e61-b223-a912badfa3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006901551s Apr 20 00:15:03.141: INFO: Pod "downward-api-2eceb951-a645-4e61-b223-a912badfa3a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010861531s STEP: Saw pod success Apr 20 00:15:03.141: INFO: Pod "downward-api-2eceb951-a645-4e61-b223-a912badfa3a5" satisfied condition "Succeeded or Failed" Apr 20 00:15:03.144: INFO: Trying to get logs from node latest-worker2 pod downward-api-2eceb951-a645-4e61-b223-a912badfa3a5 container dapi-container: STEP: delete the pod Apr 20 00:15:03.196: INFO: Waiting for pod downward-api-2eceb951-a645-4e61-b223-a912badfa3a5 to disappear Apr 20 00:15:03.265: INFO: Pod downward-api-2eceb951-a645-4e61-b223-a912badfa3a5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:15:03.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2896" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":2009,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:15:03.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:15:03.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f" in namespace "downward-api-7186" to be "Succeeded or Failed" Apr 20 00:15:03.326: INFO: Pod "downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455857ms Apr 20 00:15:05.330: INFO: Pod "downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008253668s Apr 20 00:15:07.335: INFO: Pod "downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013497365s STEP: Saw pod success Apr 20 00:15:07.335: INFO: Pod "downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f" satisfied condition "Succeeded or Failed" Apr 20 00:15:07.338: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f container client-container: STEP: delete the pod Apr 20 00:15:07.370: INFO: Waiting for pod downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f to disappear Apr 20 00:15:07.382: INFO: Pod downwardapi-volume-ad8e0367-65de-4369-b173-2a69116c548f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:15:07.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7186" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":2026,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:15:07.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:15:07.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3805" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":106,"skipped":2052,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:15:07.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:15:08.175: INFO: 
deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:15:10.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938508, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938508, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938508, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938508, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:15:13.237: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:15:13.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3632" for this suite. STEP: Destroying namespace "webhook-3632-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.763 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":107,"skipped":2059,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:15:13.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-bcd9b9e5-7016-4bb6-884a-1d84df45f533 STEP: Creating a pod to test consume configMaps Apr 20 00:15:14.410: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd" in namespace 
"projected-3149" to be "Succeeded or Failed" Apr 20 00:15:14.620: INFO: Pod "pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd": Phase="Pending", Reason="", readiness=false. Elapsed: 209.450808ms Apr 20 00:15:16.624: INFO: Pod "pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213771084s Apr 20 00:15:18.627: INFO: Pod "pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216512376s STEP: Saw pod success Apr 20 00:15:18.627: INFO: Pod "pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd" satisfied condition "Succeeded or Failed" Apr 20 00:15:18.629: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd container projected-configmap-volume-test: STEP: delete the pod Apr 20 00:15:18.679: INFO: Waiting for pod pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd to disappear Apr 20 00:15:18.687: INFO: Pod pod-projected-configmaps-5f595fac-ea27-44a3-920a-8e1315d050cd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:15:18.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3149" for this suite. 
• [SLOW TEST:5.349 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":2076,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:15:18.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 20 00:15:18.819: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 
/api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461469 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:15:18.819: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461469 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 20 00:15:28.827: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461511 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:15:28.827: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461511 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 20 00:15:38.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461543 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:15:38.836: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461543 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 20 00:15:48.845: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461573 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:15:48.845: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-a 4ccc6991-ecac-4d9d-bede-a9fe0178eb52 9461573 0 2020-04-20 00:15:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 20 00:15:58.852: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-b f3a27b57-a3fe-417f-9bdf-444faf43ff84 9461603 0 2020-04-20 00:15:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:15:58.852: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-b f3a27b57-a3fe-417f-9bdf-444faf43ff84 
9461603 0 2020-04-20 00:15:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 20 00:16:08.860: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-b f3a27b57-a3fe-417f-9bdf-444faf43ff84 9461633 0 2020-04-20 00:15:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:16:08.860: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6548 /api/v1/namespaces/watch-6548/configmaps/e2e-watch-test-configmap-b f3a27b57-a3fe-417f-9bdf-444faf43ff84 9461633 0 2020-04-20 00:15:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:16:18.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6548" for this suite. 
• [SLOW TEST:60.175 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":109,"skipped":2112,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:16:18.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-8606/configmap-test-8b81f631-bdfb-4ab2-a5c3-4c61c86fa340 STEP: Creating a pod to test consume configMaps Apr 20 00:16:19.098: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5" in namespace "configmap-8606" to be "Succeeded or Failed" Apr 20 00:16:19.114: INFO: Pod "pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.977086ms Apr 20 00:16:21.117: INFO: Pod "pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019550232s Apr 20 00:16:23.122: INFO: Pod "pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023988954s STEP: Saw pod success Apr 20 00:16:23.122: INFO: Pod "pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5" satisfied condition "Succeeded or Failed" Apr 20 00:16:23.125: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5 container env-test: STEP: delete the pod Apr 20 00:16:23.145: INFO: Waiting for pod pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5 to disappear Apr 20 00:16:23.156: INFO: Pod pod-configmaps-8cf78237-218b-402a-8bfc-861c0eb45cd5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:16:23.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8606" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":2118,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:16:23.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 20 00:16:23.266: INFO: Waiting up to 5m0s for pod "pod-0cc53680-067e-4517-b743-a55586c91c60" in namespace "emptydir-7072" to be "Succeeded or Failed" Apr 20 00:16:23.297: INFO: Pod "pod-0cc53680-067e-4517-b743-a55586c91c60": Phase="Pending", Reason="", readiness=false. Elapsed: 31.728309ms Apr 20 00:16:25.301: INFO: Pod "pod-0cc53680-067e-4517-b743-a55586c91c60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035675061s Apr 20 00:16:27.306: INFO: Pod "pod-0cc53680-067e-4517-b743-a55586c91c60": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040009157s STEP: Saw pod success Apr 20 00:16:27.306: INFO: Pod "pod-0cc53680-067e-4517-b743-a55586c91c60" satisfied condition "Succeeded or Failed" Apr 20 00:16:27.308: INFO: Trying to get logs from node latest-worker2 pod pod-0cc53680-067e-4517-b743-a55586c91c60 container test-container: STEP: delete the pod Apr 20 00:16:27.367: INFO: Waiting for pod pod-0cc53680-067e-4517-b743-a55586c91c60 to disappear Apr 20 00:16:27.378: INFO: Pod pod-0cc53680-067e-4517-b743-a55586c91c60 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:16:27.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7072" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":2159,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:16:27.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes 
STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 20 00:16:31.618: INFO: &Pod{ObjectMeta:{send-events-9446ac18-63ae-47a8-858a-fffb76d0edff events-5173 /api/v1/namespaces/events-5173/pods/send-events-9446ac18-63ae-47a8-858a-fffb76d0edff bfd3d4b7-bb74-4322-9ca8-95f818c295ba 9461757 0 2020-04-20 00:16:27 +0000 UTC map[name:foo time:497466544] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nl47w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nl47w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nl47w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,Ho
stNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:16:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:16:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.184,StartTime:2020-04-20 00:16:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:16:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://88fdf002d2864b708c7b3b19e93c5f94e4f944fa52fdc2813df2670e33fe9eb4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 20 00:16:33.623: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 20 00:16:35.628: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:16:35.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5173" for this suite. 
• [SLOW TEST:8.273 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":112,"skipped":2166,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:16:35.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-20a04139-d41f-4071-94da-222180562c1e STEP: Creating a pod to test consume secrets Apr 20 00:16:35.749: INFO: Waiting up to 5m0s for pod "pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2" in namespace "secrets-4220" to be "Succeeded or Failed" Apr 20 00:16:35.752: INFO: Pod "pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.012137ms Apr 20 00:16:37.962: INFO: Pod "pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21280946s Apr 20 00:16:39.966: INFO: Pod "pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216773484s STEP: Saw pod success Apr 20 00:16:39.966: INFO: Pod "pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2" satisfied condition "Succeeded or Failed" Apr 20 00:16:39.969: INFO: Trying to get logs from node latest-worker pod pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2 container secret-volume-test: STEP: delete the pod Apr 20 00:16:40.214: INFO: Waiting for pod pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2 to disappear Apr 20 00:16:40.220: INFO: Pod pod-secrets-14f021c6-39a7-4c68-8626-203f12eb20e2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:16:40.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4220" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":2187,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:16:40.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-05fe506b-8700-4a1a-a761-9b6539901343 STEP: Creating a pod to test consume secrets Apr 20 00:16:40.333: INFO: Waiting up to 5m0s for pod "pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0" in namespace "secrets-1047" to be "Succeeded or Failed" Apr 20 00:16:40.368: INFO: Pod "pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0": Phase="Pending", Reason="", readiness=false. Elapsed: 35.080482ms Apr 20 00:16:42.372: INFO: Pod "pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039275931s Apr 20 00:16:44.704: INFO: Pod "pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.370676912s STEP: Saw pod success Apr 20 00:16:44.704: INFO: Pod "pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0" satisfied condition "Succeeded or Failed" Apr 20 00:16:44.706: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0 container secret-volume-test: STEP: delete the pod Apr 20 00:16:45.143: INFO: Waiting for pod pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0 to disappear Apr 20 00:16:45.156: INFO: Pod pod-secrets-2dfba77c-0996-4ecf-8560-40b597d83bb0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:16:45.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1047" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":2191,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:16:45.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2840 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-2840 Apr 20 00:16:45.234: INFO: Found 0 stateful pods, waiting for 1 Apr 20 00:16:55.239: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 20 00:16:55.256: INFO: Deleting all statefulset in ns statefulset-2840 Apr 20 00:16:55.274: INFO: Scaling statefulset ss to 0 Apr 20 00:17:05.326: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 00:17:05.330: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:17:05.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2840" for this suite. 
• [SLOW TEST:20.188 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":115,"skipped":2204,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:17:05.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 20 00:17:05.425: INFO: Waiting up to 5m0s for pod "pod-228f0be6-ff15-4272-98f4-cde32f7e38ac" in namespace "emptydir-7949" to be "Succeeded or Failed" Apr 20 00:17:05.440: INFO: Pod "pod-228f0be6-ff15-4272-98f4-cde32f7e38ac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.313805ms Apr 20 00:17:07.476: INFO: Pod "pod-228f0be6-ff15-4272-98f4-cde32f7e38ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050869916s Apr 20 00:17:09.480: INFO: Pod "pod-228f0be6-ff15-4272-98f4-cde32f7e38ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054649795s STEP: Saw pod success Apr 20 00:17:09.480: INFO: Pod "pod-228f0be6-ff15-4272-98f4-cde32f7e38ac" satisfied condition "Succeeded or Failed" Apr 20 00:17:09.482: INFO: Trying to get logs from node latest-worker pod pod-228f0be6-ff15-4272-98f4-cde32f7e38ac container test-container: STEP: delete the pod Apr 20 00:17:09.558: INFO: Waiting for pod pod-228f0be6-ff15-4272-98f4-cde32f7e38ac to disappear Apr 20 00:17:09.564: INFO: Pod pod-228f0be6-ff15-4272-98f4-cde32f7e38ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:17:09.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7949" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":2208,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:17:09.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:17:13.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9069" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2212,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:17:13.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 20 00:17:13.727: INFO: Created pod &Pod{ObjectMeta:{dns-3543 dns-3543 /api/v1/namespaces/dns-3543/pods/dns-3543 5c3f4395-c85d-47e4-b308-746bfdaba854 9462065 0 2020-04-20 00:17:13 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6c8sn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6c8sn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6c8sn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecret
s:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:17:13.738: INFO: The status of Pod dns-3543 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:17:15.745: INFO: The status of Pod dns-3543 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:17:17.742: INFO: The status of Pod dns-3543 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 20 00:17:17.742: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3543 PodName:dns-3543 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:17:17.742: INFO: >>> kubeConfig: /root/.kube/config I0420 00:17:17.770774 8 log.go:172] (0xc002aee2c0) (0xc000694780) Create stream I0420 00:17:17.770808 8 log.go:172] (0xc002aee2c0) (0xc000694780) Stream added, broadcasting: 1 I0420 00:17:17.772746 8 log.go:172] (0xc002aee2c0) Reply frame received for 1 I0420 00:17:17.772772 8 log.go:172] (0xc002aee2c0) (0xc001f4f180) Create stream I0420 00:17:17.772782 8 log.go:172] (0xc002aee2c0) (0xc001f4f180) Stream added, broadcasting: 3 I0420 00:17:17.773884 8 log.go:172] (0xc002aee2c0) Reply frame received for 3 I0420 00:17:17.773911 8 log.go:172] (0xc002aee2c0) (0xc000694be0) Create stream I0420 00:17:17.773920 8 log.go:172] (0xc002aee2c0) (0xc000694be0) Stream added, broadcasting: 5 I0420 00:17:17.774731 8 log.go:172] (0xc002aee2c0) Reply frame received for 5 I0420 00:17:17.852644 8 log.go:172] (0xc002aee2c0) Data frame received for 3 I0420 00:17:17.852688 8 log.go:172] (0xc001f4f180) (3) Data frame handling I0420 00:17:17.852713 8 log.go:172] (0xc001f4f180) (3) Data frame sent I0420 00:17:17.854068 8 log.go:172] (0xc002aee2c0) Data frame received for 5 I0420 00:17:17.854100 8 log.go:172] (0xc000694be0) (5) Data frame handling I0420 00:17:17.854138 8 log.go:172] (0xc002aee2c0) Data frame received for 3 I0420 00:17:17.854161 8 log.go:172] (0xc001f4f180) (3) Data frame handling I0420 00:17:17.856077 8 log.go:172] (0xc002aee2c0) Data frame received for 1 I0420 00:17:17.856103 8 log.go:172] (0xc000694780) (1) Data frame handling I0420 00:17:17.856120 8 log.go:172] (0xc000694780) (1) Data frame sent I0420 00:17:17.856132 8 log.go:172] (0xc002aee2c0) (0xc000694780) Stream removed, broadcasting: 1 I0420 00:17:17.856155 8 log.go:172] (0xc002aee2c0) Go away received I0420 00:17:17.856285 8 log.go:172] (0xc002aee2c0) 
(0xc000694780) Stream removed, broadcasting: 1 I0420 00:17:17.856305 8 log.go:172] (0xc002aee2c0) (0xc001f4f180) Stream removed, broadcasting: 3 I0420 00:17:17.856363 8 log.go:172] (0xc002aee2c0) (0xc000694be0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 20 00:17:17.856: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3543 PodName:dns-3543 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:17:17.856: INFO: >>> kubeConfig: /root/.kube/config I0420 00:17:17.887736 8 log.go:172] (0xc002c5a840) (0xc001a6c8c0) Create stream I0420 00:17:17.887761 8 log.go:172] (0xc002c5a840) (0xc001a6c8c0) Stream added, broadcasting: 1 I0420 00:17:17.890303 8 log.go:172] (0xc002c5a840) Reply frame received for 1 I0420 00:17:17.890351 8 log.go:172] (0xc002c5a840) (0xc001a6cb40) Create stream I0420 00:17:17.890368 8 log.go:172] (0xc002c5a840) (0xc001a6cb40) Stream added, broadcasting: 3 I0420 00:17:17.891370 8 log.go:172] (0xc002c5a840) Reply frame received for 3 I0420 00:17:17.891420 8 log.go:172] (0xc002c5a840) (0xc000695540) Create stream I0420 00:17:17.891443 8 log.go:172] (0xc002c5a840) (0xc000695540) Stream added, broadcasting: 5 I0420 00:17:17.892401 8 log.go:172] (0xc002c5a840) Reply frame received for 5 I0420 00:17:17.959775 8 log.go:172] (0xc002c5a840) Data frame received for 3 I0420 00:17:17.959801 8 log.go:172] (0xc001a6cb40) (3) Data frame handling I0420 00:17:17.959816 8 log.go:172] (0xc001a6cb40) (3) Data frame sent I0420 00:17:17.960788 8 log.go:172] (0xc002c5a840) Data frame received for 3 I0420 00:17:17.960820 8 log.go:172] (0xc001a6cb40) (3) Data frame handling I0420 00:17:17.960838 8 log.go:172] (0xc002c5a840) Data frame received for 5 I0420 00:17:17.960844 8 log.go:172] (0xc000695540) (5) Data frame handling I0420 00:17:17.962823 8 log.go:172] (0xc002c5a840) Data frame received for 1 I0420 00:17:17.962849 8 log.go:172] (0xc001a6c8c0) (1) 
Data frame handling I0420 00:17:17.962879 8 log.go:172] (0xc001a6c8c0) (1) Data frame sent I0420 00:17:17.962910 8 log.go:172] (0xc002c5a840) (0xc001a6c8c0) Stream removed, broadcasting: 1 I0420 00:17:17.962969 8 log.go:172] (0xc002c5a840) Go away received I0420 00:17:17.963085 8 log.go:172] (0xc002c5a840) (0xc001a6c8c0) Stream removed, broadcasting: 1 I0420 00:17:17.963112 8 log.go:172] (0xc002c5a840) (0xc001a6cb40) Stream removed, broadcasting: 3 I0420 00:17:17.963131 8 log.go:172] (0xc002c5a840) (0xc000695540) Stream removed, broadcasting: 5 Apr 20 00:17:17.963: INFO: Deleting pod dns-3543... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:17:17.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3543" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":118,"skipped":2281,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:17:18.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 20 00:17:18.455: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:18.503: INFO: Number of nodes with available pods: 0 Apr 20 00:17:18.503: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:17:19.507: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:19.510: INFO: Number of nodes with available pods: 0 Apr 20 00:17:19.510: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:17:20.508: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:20.511: INFO: Number of nodes with available pods: 0 Apr 20 00:17:20.511: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:17:21.507: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:21.510: INFO: Number of nodes with available pods: 1 Apr 20 00:17:21.510: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:17:22.508: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:22.512: INFO: Number of nodes with available pods: 2 Apr 20 00:17:22.512: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is 
revived. Apr 20 00:17:22.541: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:22.544: INFO: Number of nodes with available pods: 1 Apr 20 00:17:22.544: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:17:23.549: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:23.553: INFO: Number of nodes with available pods: 1 Apr 20 00:17:23.553: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:17:24.549: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:24.553: INFO: Number of nodes with available pods: 1 Apr 20 00:17:24.553: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:17:25.549: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:25.554: INFO: Number of nodes with available pods: 1 Apr 20 00:17:25.554: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:17:26.549: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:26.553: INFO: Number of nodes with available pods: 1 Apr 20 00:17:26.553: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:17:27.548: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:27.551: INFO: Number of nodes with available pods: 1 Apr 20 
00:17:27.551: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:17:28.549: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:17:28.553: INFO: Number of nodes with available pods: 2 Apr 20 00:17:28.553: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8201, will wait for the garbage collector to delete the pods Apr 20 00:17:28.614: INFO: Deleting DaemonSet.extensions daemon-set took: 6.430276ms Apr 20 00:17:28.715: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.18176ms Apr 20 00:17:43.017: INFO: Number of nodes with available pods: 0 Apr 20 00:17:43.017: INFO: Number of running nodes: 0, number of available pods: 0 Apr 20 00:17:43.019: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8201/daemonsets","resourceVersion":"9462245"},"items":null} Apr 20 00:17:43.021: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8201/pods","resourceVersion":"9462245"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:17:43.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8201" for this suite. 
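The run above repeatedly logs that the DaemonSet pods cannot tolerate the `node-role.kubernetes.io/master:NoSchedule` taint on `latest-control-plane`, so that node is skipped. A minimal sketch of the toleration a DaemonSet would need to also schedule onto that node (labels, container name, and image are illustrative assumptions, not this test's actual manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # the test's DaemonSet is named "daemon-set"
spec:
  selector:
    matchLabels:
      app: daemon-set         # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      tolerations:
      # Without this toleration the scheduler skips tainted
      # control-plane nodes, exactly as the log shows.
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # placeholder image, assumption
```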
• [SLOW TEST:25.025 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":119,"skipped":2357,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:17:43.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 20 00:17:43.083: INFO: namespace kubectl-3685 Apr 20 00:17:43.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3685' Apr 20 00:17:43.385: INFO: stderr: "" Apr 20 00:17:43.385: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 20 00:17:44.391: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:17:44.391: INFO: Found 0 / 1 Apr 20 00:17:45.389: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:17:45.389: INFO: Found 0 / 1 Apr 20 00:17:46.389: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:17:46.389: INFO: Found 1 / 1 Apr 20 00:17:46.389: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 20 00:17:46.393: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:17:46.393: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 20 00:17:46.393: INFO: wait on agnhost-master startup in kubectl-3685 Apr 20 00:17:46.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-8bjjb agnhost-master --namespace=kubectl-3685' Apr 20 00:17:46.508: INFO: stderr: "" Apr 20 00:17:46.508: INFO: stdout: "Paused\n" STEP: exposing RC Apr 20 00:17:46.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3685' Apr 20 00:17:46.672: INFO: stderr: "" Apr 20 00:17:46.672: INFO: stdout: "service/rm2 exposed\n" Apr 20 00:17:46.676: INFO: Service rm2 in namespace kubectl-3685 found. STEP: exposing service Apr 20 00:17:48.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3685' Apr 20 00:17:49.816: INFO: stderr: "" Apr 20 00:17:49.816: INFO: stdout: "service/rm3 exposed\n" Apr 20 00:17:49.821: INFO: Service rm3 in namespace kubectl-3685 found. 
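The two `kubectl expose` calls above put a Service (`rm2`) in front of the RC and then a second Service (`rm3`) in front of the first. A rough sketch of the port wiring those flags request (port values taken from the log; this models only the flags, not kubectl's real object construction):

```python
# Illustrative model of the Services created by the two
# `kubectl expose` invocations in the log above.

def expose(name, port, target_port):
    """Return a minimal Service-like dict for an expose call (sketch)."""
    return {
        "metadata": {"name": name},
        "spec": {"ports": [{"port": port, "targetPort": target_port}]},
    }

# kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
rm2 = expose("rm2", 1234, 6379)
# kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
rm3 = expose("rm3", 2345, 6379)

# Both services ultimately forward to the same container port, 6379.
assert rm2["spec"]["ports"][0]["targetPort"] == 6379
assert rm3["spec"]["ports"][0]["targetPort"] == 6379
```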
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:17:51.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3685" for this suite. • [SLOW TEST:8.799 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":120,"skipped":2371,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:17:51.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:17:51.906: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 20 00:17:56.909: INFO: Pod name rollover-pod: Found 1 
pods out of 1 STEP: ensuring each pod is running Apr 20 00:17:56.909: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 20 00:17:58.913: INFO: Creating deployment "test-rollover-deployment" Apr 20 00:17:58.926: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 20 00:18:00.932: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 20 00:18:00.938: INFO: Ensure that both replica sets have 1 created replica Apr 20 00:18:00.943: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 20 00:18:00.950: INFO: Updating deployment test-rollover-deployment Apr 20 00:18:00.950: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 20 00:18:02.986: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 20 00:18:02.990: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 20 00:18:03.037: INFO: all replica sets need to contain the pod-template-hash label Apr 20 00:18:03.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938681, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:18:05.045: INFO: all replica sets need to contain the pod-template-hash label Apr 20 00:18:05.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938684, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:18:07.045: INFO: all replica sets need to contain the pod-template-hash label Apr 20 00:18:07.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938684, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:18:09.043: INFO: all replica sets need to contain the pod-template-hash label Apr 20 00:18:09.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938684, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:18:11.045: INFO: all replica sets need to contain the pod-template-hash label Apr 20 00:18:11.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63722938684, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:18:13.043: INFO: all replica sets need to contain the pod-template-hash label Apr 20 00:18:13.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938684, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938678, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:18:15.045: INFO: Apr 20 00:18:15.045: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 20 00:18:15.053: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1276 /apis/apps/v1/namespaces/deployment-1276/deployments/test-rollover-deployment 13ea82fa-db84-4d39-a73e-0c4c52448885 9462484 2 2020-04-20 00:17:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a0df58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-20 00:17:58 +0000 UTC,LastTransitionTime:2020-04-20 00:17:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-20 00:18:14 +0000 UTC,LastTransitionTime:2020-04-20 00:17:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 20 00:18:15.056: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 
deployment-1276 /apis/apps/v1/namespaces/deployment-1276/replicasets/test-rollover-deployment-78df7bc796 f0c42320-8e51-4cb2-8cf3-270cfc7f237d 9462473 2 2020-04-20 00:18:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 13ea82fa-db84-4d39-a73e-0c4c52448885 0xc002214777 0xc002214778}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022147e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:18:15.056: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 20 00:18:15.056: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1276 /apis/apps/v1/namespaces/deployment-1276/replicasets/test-rollover-controller 9f8d9b1c-9c9c-4e48-b5d0-1e540ec737a3 9462482 2 2020-04-20 00:17:51 +0000 UTC map[name:rollover-pod 
pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 13ea82fa-db84-4d39-a73e-0c4c52448885 0xc002214627 0xc002214628}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0022146e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:18:15.057: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1276 /apis/apps/v1/namespaces/deployment-1276/replicasets/test-rollover-deployment-f6c94f66c 754d0d95-7b3d-4d68-8c7e-54a4a9b15249 9462420 2 2020-04-20 00:17:58 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 13ea82fa-db84-4d39-a73e-0c4c52448885 0xc002214860 0xc002214861}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave 
gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022148d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:18:15.060: INFO: Pod "test-rollover-deployment-78df7bc796-7vz6c" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-7vz6c test-rollover-deployment-78df7bc796- deployment-1276 /api/v1/namespaces/deployment-1276/pods/test-rollover-deployment-78df7bc796-7vz6c 5ae1f462-1f46-4ff7-8a0e-1bbf1c7ee4ac 9462441 0 2020-04-20 00:18:01 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 f0c42320-8e51-4cb2-8cf3-270cfc7f237d 0xc002214e87 0xc002214e88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sndb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sndb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sndb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:18:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:18:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:18:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:18:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.103,StartTime:2020-04-20 00:18:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:18:03 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://b573952c4d2a12f9f7e536dd4486397c323bcbe4ae71a626e27d34e295ab5fe4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:18:15.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1276" for this suite. • [SLOW TEST:23.231 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":121,"skipped":2389,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:18:15.068: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:18:15.272: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 20 00:18:18.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8566 create -f -' Apr 20 00:18:23.329: INFO: stderr: "" Apr 20 00:18:23.329: INFO: stdout: "e2e-test-crd-publish-openapi-4315-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 20 00:18:23.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8566 delete e2e-test-crd-publish-openapi-4315-crds test-cr' Apr 20 00:18:23.440: INFO: stderr: "" Apr 20 00:18:23.440: INFO: stdout: "e2e-test-crd-publish-openapi-4315-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 20 00:18:23.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8566 apply -f -' Apr 20 00:18:23.676: INFO: stderr: "" Apr 20 00:18:23.676: INFO: stdout: "e2e-test-crd-publish-openapi-4315-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 20 00:18:23.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8566 delete e2e-test-crd-publish-openapi-4315-crds test-cr' Apr 20 00:18:23.775: INFO: stderr: "" Apr 20 00:18:23.775: INFO: stdout: 
"e2e-test-crd-publish-openapi-4315-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 20 00:18:23.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4315-crds' Apr 20 00:18:24.037: INFO: stderr: "" Apr 20 00:18:24.037: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4315-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:18:26.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8566" for this suite. 
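The CRD in this test publishes a schema that keeps arbitrary properties inside a nested field, which is why `kubectl create`/`apply` accept the unknown properties above. A hedged sketch of the relevant schema fragment (group, kind, and version names are abbreviated placeholders, not the generated e2e names, and the required `metadata`/`group`/`names` stanzas are elided):

```yaml
# Fragment of a CRD schema that preserves unknown fields in an
# embedded object, as exercised by the test above (illustrative names).
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true  # keep any properties
```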
• [SLOW TEST:11.874 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":122,"skipped":2389,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:18:26.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:18:27.029: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 5.905582ms) Apr 20 00:18:27.032: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.128132ms) Apr 20 00:18:27.035: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.319917ms) Apr 20 00:18:27.053: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 17.525767ms) Apr 20 00:18:27.056: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.362085ms) Apr 20 00:18:27.060: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.424248ms) Apr 20 00:18:27.063: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.534654ms) Apr 20 00:18:27.067: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.274532ms) Apr 20 00:18:27.070: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.796869ms) Apr 20 00:18:27.072: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.949664ms) Apr 20 00:18:27.076: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.254023ms) Apr 20 00:18:27.079: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.356258ms) Apr 20 00:18:27.082: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.908465ms) Apr 20 00:18:27.085: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.956844ms) Apr 20 00:18:27.089: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.413776ms) Apr 20 00:18:27.092: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.031906ms) Apr 20 00:18:27.095: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.427898ms) Apr 20 00:18:27.098: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.267646ms) Apr 20 00:18:27.101: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.906076ms) Apr 20 00:18:27.104: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.100964ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:18:27.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6183" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":123,"skipped":2406,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:18:27.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-21fdf1cf-c9f5-4c23-9f7f-f9c25ce43f88 in namespace container-probe-347 Apr 20 00:18:31.225: INFO: Started pod busybox-21fdf1cf-c9f5-4c23-9f7f-f9c25ce43f88 in namespace container-probe-347 STEP: checking the pod's current state and verifying that restartCount is present Apr 20 00:18:31.228: INFO: Initial restart count of pod 
busybox-21fdf1cf-c9f5-4c23-9f7f-f9c25ce43f88 is 0 Apr 20 00:19:25.343: INFO: Restart count of pod container-probe-347/busybox-21fdf1cf-c9f5-4c23-9f7f-f9c25ce43f88 is now 1 (54.11517021s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:19:25.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-347" for this suite. • [SLOW TEST:58.283 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2452,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:19:25.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:19:25.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93" in namespace "projected-6088" to be "Succeeded or Failed" Apr 20 00:19:25.448: INFO: Pod "downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.985906ms Apr 20 00:19:27.452: INFO: Pod "downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00785426s Apr 20 00:19:29.457: INFO: Pod "downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012659184s STEP: Saw pod success Apr 20 00:19:29.457: INFO: Pod "downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93" satisfied condition "Succeeded or Failed" Apr 20 00:19:29.460: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93 container client-container: STEP: delete the pod Apr 20 00:19:29.494: INFO: Waiting for pod downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93 to disappear Apr 20 00:19:29.514: INFO: Pod downwardapi-volume-cc126f82-2510-4d50-ad23-45e010f44d93 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:19:29.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6088" for this suite. 
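The pod spec this downward-API check creates is not echoed in the log. A minimal sketch (pod, volume, and file names hypothetical) of a projected `downwardAPI` volume exposing a container's CPU limit as a file, which is what the test reads back, would be:

```yaml
# Hypothetical sketch of a pod projecting the container's CPU limit
# into a file via a downwardAPI volume source.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m            # report the limit in millicores
```

The container prints the projected value and exits, so the pod reaches `Succeeded`, matching the "Succeeded or Failed" wait and log retrieval seen above.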
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2463,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:19:29.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 20 00:19:29.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 20 00:19:29.668: INFO: stderr: "" Apr 20 00:19:29.668: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:19:29.668: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3415" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":126,"skipped":2464,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:19:29.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:19:46.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9437" for this suite. • [SLOW TEST:17.238 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":127,"skipped":2478,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:19:46.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 20 00:19:57.008: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.008: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.046627 8 log.go:172] (0xc0054f68f0) (0xc0023bbd60) Create stream I0420 00:19:57.046655 8 log.go:172] (0xc0054f68f0) (0xc0023bbd60) Stream added, broadcasting: 1 I0420 00:19:57.049686 8 log.go:172] (0xc0054f68f0) Reply frame received for 1 I0420 00:19:57.049746 8 log.go:172] (0xc0054f68f0) (0xc0013ec140) Create stream I0420 00:19:57.049772 8 log.go:172] (0xc0054f68f0) (0xc0013ec140) Stream added, broadcasting: 
3 I0420 00:19:57.051054 8 log.go:172] (0xc0054f68f0) Reply frame received for 3 I0420 00:19:57.051107 8 log.go:172] (0xc0054f68f0) (0xc0023bbe00) Create stream I0420 00:19:57.051135 8 log.go:172] (0xc0054f68f0) (0xc0023bbe00) Stream added, broadcasting: 5 I0420 00:19:57.052524 8 log.go:172] (0xc0054f68f0) Reply frame received for 5 I0420 00:19:57.141532 8 log.go:172] (0xc0054f68f0) Data frame received for 3 I0420 00:19:57.141565 8 log.go:172] (0xc0013ec140) (3) Data frame handling I0420 00:19:57.141579 8 log.go:172] (0xc0013ec140) (3) Data frame sent I0420 00:19:57.141586 8 log.go:172] (0xc0054f68f0) Data frame received for 3 I0420 00:19:57.141591 8 log.go:172] (0xc0013ec140) (3) Data frame handling I0420 00:19:57.141705 8 log.go:172] (0xc0054f68f0) Data frame received for 5 I0420 00:19:57.141746 8 log.go:172] (0xc0023bbe00) (5) Data frame handling I0420 00:19:57.143315 8 log.go:172] (0xc0054f68f0) Data frame received for 1 I0420 00:19:57.143336 8 log.go:172] (0xc0023bbd60) (1) Data frame handling I0420 00:19:57.143364 8 log.go:172] (0xc0023bbd60) (1) Data frame sent I0420 00:19:57.143393 8 log.go:172] (0xc0054f68f0) (0xc0023bbd60) Stream removed, broadcasting: 1 I0420 00:19:57.143514 8 log.go:172] (0xc0054f68f0) (0xc0023bbd60) Stream removed, broadcasting: 1 I0420 00:19:57.143543 8 log.go:172] (0xc0054f68f0) (0xc0013ec140) Stream removed, broadcasting: 3 I0420 00:19:57.143559 8 log.go:172] (0xc0054f68f0) (0xc0023bbe00) Stream removed, broadcasting: 5 Apr 20 00:19:57.143: INFO: Exec stderr: "" Apr 20 00:19:57.143: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0420 00:19:57.143619 8 log.go:172] (0xc0054f68f0) Go away received Apr 20 00:19:57.143: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.174589 8 log.go:172] (0xc0054f6f20) (0xc0018d6140) Create stream I0420 00:19:57.174617 8 log.go:172] 
(0xc0054f6f20) (0xc0018d6140) Stream added, broadcasting: 1 I0420 00:19:57.180752 8 log.go:172] (0xc0054f6f20) Reply frame received for 1 I0420 00:19:57.180806 8 log.go:172] (0xc0054f6f20) (0xc001a6d360) Create stream I0420 00:19:57.180842 8 log.go:172] (0xc0054f6f20) (0xc001a6d360) Stream added, broadcasting: 3 I0420 00:19:57.182393 8 log.go:172] (0xc0054f6f20) Reply frame received for 3 I0420 00:19:57.182432 8 log.go:172] (0xc0054f6f20) (0xc0018d61e0) Create stream I0420 00:19:57.182454 8 log.go:172] (0xc0054f6f20) (0xc0018d61e0) Stream added, broadcasting: 5 I0420 00:19:57.183771 8 log.go:172] (0xc0054f6f20) Reply frame received for 5 I0420 00:19:57.259682 8 log.go:172] (0xc0054f6f20) Data frame received for 3 I0420 00:19:57.259749 8 log.go:172] (0xc001a6d360) (3) Data frame handling I0420 00:19:57.259774 8 log.go:172] (0xc001a6d360) (3) Data frame sent I0420 00:19:57.259802 8 log.go:172] (0xc0054f6f20) Data frame received for 5 I0420 00:19:57.259836 8 log.go:172] (0xc0018d61e0) (5) Data frame handling I0420 00:19:57.259858 8 log.go:172] (0xc0054f6f20) Data frame received for 3 I0420 00:19:57.259867 8 log.go:172] (0xc001a6d360) (3) Data frame handling I0420 00:19:57.261970 8 log.go:172] (0xc0054f6f20) Data frame received for 1 I0420 00:19:57.262024 8 log.go:172] (0xc0018d6140) (1) Data frame handling I0420 00:19:57.262078 8 log.go:172] (0xc0018d6140) (1) Data frame sent I0420 00:19:57.262105 8 log.go:172] (0xc0054f6f20) (0xc0018d6140) Stream removed, broadcasting: 1 I0420 00:19:57.262135 8 log.go:172] (0xc0054f6f20) Go away received I0420 00:19:57.262269 8 log.go:172] (0xc0054f6f20) (0xc0018d6140) Stream removed, broadcasting: 1 I0420 00:19:57.262291 8 log.go:172] (0xc0054f6f20) (0xc001a6d360) Stream removed, broadcasting: 3 I0420 00:19:57.262304 8 log.go:172] (0xc0054f6f20) (0xc0018d61e0) Stream removed, broadcasting: 5 Apr 20 00:19:57.262: INFO: Exec stderr: "" Apr 20 00:19:57.262: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.262: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.297533 8 log.go:172] (0xc0054f7550) (0xc0018d6640) Create stream I0420 00:19:57.297559 8 log.go:172] (0xc0054f7550) (0xc0018d6640) Stream added, broadcasting: 1 I0420 00:19:57.298948 8 log.go:172] (0xc0054f7550) Reply frame received for 1 I0420 00:19:57.298985 8 log.go:172] (0xc0054f7550) (0xc0013ec1e0) Create stream I0420 00:19:57.298997 8 log.go:172] (0xc0054f7550) (0xc0013ec1e0) Stream added, broadcasting: 3 I0420 00:19:57.299885 8 log.go:172] (0xc0054f7550) Reply frame received for 3 I0420 00:19:57.299920 8 log.go:172] (0xc0054f7550) (0xc000f7bd60) Create stream I0420 00:19:57.299936 8 log.go:172] (0xc0054f7550) (0xc000f7bd60) Stream added, broadcasting: 5 I0420 00:19:57.300759 8 log.go:172] (0xc0054f7550) Reply frame received for 5 I0420 00:19:57.382970 8 log.go:172] (0xc0054f7550) Data frame received for 5 I0420 00:19:57.383010 8 log.go:172] (0xc000f7bd60) (5) Data frame handling I0420 00:19:57.383035 8 log.go:172] (0xc0054f7550) Data frame received for 3 I0420 00:19:57.383086 8 log.go:172] (0xc0013ec1e0) (3) Data frame handling I0420 00:19:57.383165 8 log.go:172] (0xc0013ec1e0) (3) Data frame sent I0420 00:19:57.383213 8 log.go:172] (0xc0054f7550) Data frame received for 3 I0420 00:19:57.383231 8 log.go:172] (0xc0013ec1e0) (3) Data frame handling I0420 00:19:57.384593 8 log.go:172] (0xc0054f7550) Data frame received for 1 I0420 00:19:57.384622 8 log.go:172] (0xc0018d6640) (1) Data frame handling I0420 00:19:57.384644 8 log.go:172] (0xc0018d6640) (1) Data frame sent I0420 00:19:57.384691 8 log.go:172] (0xc0054f7550) (0xc0018d6640) Stream removed, broadcasting: 1 I0420 00:19:57.384735 8 log.go:172] (0xc0054f7550) Go away received I0420 00:19:57.384863 8 log.go:172] (0xc0054f7550) (0xc0018d6640) Stream removed, broadcasting: 1 I0420 
00:19:57.384893 8 log.go:172] (0xc0054f7550) (0xc0013ec1e0) Stream removed, broadcasting: 3 I0420 00:19:57.384917 8 log.go:172] (0xc0054f7550) (0xc000f7bd60) Stream removed, broadcasting: 5 Apr 20 00:19:57.384: INFO: Exec stderr: "" Apr 20 00:19:57.385: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.385: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.416811 8 log.go:172] (0xc0061bc370) (0xc0012128c0) Create stream I0420 00:19:57.416837 8 log.go:172] (0xc0061bc370) (0xc0012128c0) Stream added, broadcasting: 1 I0420 00:19:57.418627 8 log.go:172] (0xc0061bc370) Reply frame received for 1 I0420 00:19:57.418659 8 log.go:172] (0xc0061bc370) (0xc000f7bf40) Create stream I0420 00:19:57.418672 8 log.go:172] (0xc0061bc370) (0xc000f7bf40) Stream added, broadcasting: 3 I0420 00:19:57.419462 8 log.go:172] (0xc0061bc370) Reply frame received for 3 I0420 00:19:57.419483 8 log.go:172] (0xc0061bc370) (0xc001212b40) Create stream I0420 00:19:57.419495 8 log.go:172] (0xc0061bc370) (0xc001212b40) Stream added, broadcasting: 5 I0420 00:19:57.420422 8 log.go:172] (0xc0061bc370) Reply frame received for 5 I0420 00:19:57.472721 8 log.go:172] (0xc0061bc370) Data frame received for 3 I0420 00:19:57.472749 8 log.go:172] (0xc000f7bf40) (3) Data frame handling I0420 00:19:57.472757 8 log.go:172] (0xc000f7bf40) (3) Data frame sent I0420 00:19:57.472766 8 log.go:172] (0xc0061bc370) Data frame received for 3 I0420 00:19:57.472774 8 log.go:172] (0xc000f7bf40) (3) Data frame handling I0420 00:19:57.472782 8 log.go:172] (0xc0061bc370) Data frame received for 5 I0420 00:19:57.472788 8 log.go:172] (0xc001212b40) (5) Data frame handling I0420 00:19:57.474319 8 log.go:172] (0xc0061bc370) Data frame received for 1 I0420 00:19:57.474360 8 log.go:172] (0xc0012128c0) (1) Data frame handling I0420 00:19:57.474381 8 log.go:172] 
(0xc0012128c0) (1) Data frame sent I0420 00:19:57.474395 8 log.go:172] (0xc0061bc370) (0xc0012128c0) Stream removed, broadcasting: 1 I0420 00:19:57.474418 8 log.go:172] (0xc0061bc370) Go away received I0420 00:19:57.474619 8 log.go:172] (0xc0061bc370) (0xc0012128c0) Stream removed, broadcasting: 1 I0420 00:19:57.474646 8 log.go:172] (0xc0061bc370) (0xc000f7bf40) Stream removed, broadcasting: 3 I0420 00:19:57.474666 8 log.go:172] (0xc0061bc370) (0xc001212b40) Stream removed, broadcasting: 5 Apr 20 00:19:57.474: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 20 00:19:57.474: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.474: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.531173 8 log.go:172] (0xc005227760) (0xc001036aa0) Create stream I0420 00:19:57.531206 8 log.go:172] (0xc005227760) (0xc001036aa0) Stream added, broadcasting: 1 I0420 00:19:57.533509 8 log.go:172] (0xc005227760) Reply frame received for 1 I0420 00:19:57.533566 8 log.go:172] (0xc005227760) (0xc0018d6780) Create stream I0420 00:19:57.533589 8 log.go:172] (0xc005227760) (0xc0018d6780) Stream added, broadcasting: 3 I0420 00:19:57.534663 8 log.go:172] (0xc005227760) Reply frame received for 3 I0420 00:19:57.534704 8 log.go:172] (0xc005227760) (0xc0018d6820) Create stream I0420 00:19:57.534715 8 log.go:172] (0xc005227760) (0xc0018d6820) Stream added, broadcasting: 5 I0420 00:19:57.535617 8 log.go:172] (0xc005227760) Reply frame received for 5 I0420 00:19:57.603194 8 log.go:172] (0xc005227760) Data frame received for 5 I0420 00:19:57.603219 8 log.go:172] (0xc0018d6820) (5) Data frame handling I0420 00:19:57.603247 8 log.go:172] (0xc005227760) Data frame received for 3 I0420 00:19:57.603287 8 log.go:172] (0xc0018d6780) (3) Data frame handling I0420 
00:19:57.603316 8 log.go:172] (0xc0018d6780) (3) Data frame sent I0420 00:19:57.603349 8 log.go:172] (0xc005227760) Data frame received for 3 I0420 00:19:57.603357 8 log.go:172] (0xc0018d6780) (3) Data frame handling I0420 00:19:57.604503 8 log.go:172] (0xc005227760) Data frame received for 1 I0420 00:19:57.604520 8 log.go:172] (0xc001036aa0) (1) Data frame handling I0420 00:19:57.604528 8 log.go:172] (0xc001036aa0) (1) Data frame sent I0420 00:19:57.604538 8 log.go:172] (0xc005227760) (0xc001036aa0) Stream removed, broadcasting: 1 I0420 00:19:57.604551 8 log.go:172] (0xc005227760) Go away received I0420 00:19:57.604652 8 log.go:172] (0xc005227760) (0xc001036aa0) Stream removed, broadcasting: 1 I0420 00:19:57.604671 8 log.go:172] (0xc005227760) (0xc0018d6780) Stream removed, broadcasting: 3 I0420 00:19:57.604678 8 log.go:172] (0xc005227760) (0xc0018d6820) Stream removed, broadcasting: 5 Apr 20 00:19:57.604: INFO: Exec stderr: "" Apr 20 00:19:57.604: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.604: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.651580 8 log.go:172] (0xc005227a20) (0xc001036be0) Create stream I0420 00:19:57.651611 8 log.go:172] (0xc005227a20) (0xc001036be0) Stream added, broadcasting: 1 I0420 00:19:57.654216 8 log.go:172] (0xc005227a20) Reply frame received for 1 I0420 00:19:57.654261 8 log.go:172] (0xc005227a20) (0xc001212c80) Create stream I0420 00:19:57.654282 8 log.go:172] (0xc005227a20) (0xc001212c80) Stream added, broadcasting: 3 I0420 00:19:57.655284 8 log.go:172] (0xc005227a20) Reply frame received for 3 I0420 00:19:57.655353 8 log.go:172] (0xc005227a20) (0xc001036dc0) Create stream I0420 00:19:57.655372 8 log.go:172] (0xc005227a20) (0xc001036dc0) Stream added, broadcasting: 5 I0420 00:19:57.656243 8 log.go:172] (0xc005227a20) Reply frame received for 5 I0420 
00:19:57.713528 8 log.go:172] (0xc005227a20) Data frame received for 5 I0420 00:19:57.713572 8 log.go:172] (0xc001036dc0) (5) Data frame handling I0420 00:19:57.713604 8 log.go:172] (0xc005227a20) Data frame received for 3 I0420 00:19:57.713628 8 log.go:172] (0xc001212c80) (3) Data frame handling I0420 00:19:57.713649 8 log.go:172] (0xc001212c80) (3) Data frame sent I0420 00:19:57.713663 8 log.go:172] (0xc005227a20) Data frame received for 3 I0420 00:19:57.713678 8 log.go:172] (0xc001212c80) (3) Data frame handling I0420 00:19:57.714952 8 log.go:172] (0xc005227a20) Data frame received for 1 I0420 00:19:57.714972 8 log.go:172] (0xc001036be0) (1) Data frame handling I0420 00:19:57.714990 8 log.go:172] (0xc001036be0) (1) Data frame sent I0420 00:19:57.715000 8 log.go:172] (0xc005227a20) (0xc001036be0) Stream removed, broadcasting: 1 I0420 00:19:57.715014 8 log.go:172] (0xc005227a20) Go away received I0420 00:19:57.715135 8 log.go:172] (0xc005227a20) (0xc001036be0) Stream removed, broadcasting: 1 I0420 00:19:57.715150 8 log.go:172] (0xc005227a20) (0xc001212c80) Stream removed, broadcasting: 3 I0420 00:19:57.715158 8 log.go:172] (0xc005227a20) (0xc001036dc0) Stream removed, broadcasting: 5 Apr 20 00:19:57.715: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 20 00:19:57.715: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.715: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.747754 8 log.go:172] (0xc0067fe6e0) (0xc0013ecaa0) Create stream I0420 00:19:57.747784 8 log.go:172] (0xc0067fe6e0) (0xc0013ecaa0) Stream added, broadcasting: 1 I0420 00:19:57.749891 8 log.go:172] (0xc0067fe6e0) Reply frame received for 1 I0420 00:19:57.749929 8 log.go:172] (0xc0067fe6e0) (0xc0018d6aa0) Create stream I0420 00:19:57.749942 8 
log.go:172] (0xc0067fe6e0) (0xc0018d6aa0) Stream added, broadcasting: 3 I0420 00:19:57.751005 8 log.go:172] (0xc0067fe6e0) Reply frame received for 3 I0420 00:19:57.751042 8 log.go:172] (0xc0067fe6e0) (0xc001036e60) Create stream I0420 00:19:57.751057 8 log.go:172] (0xc0067fe6e0) (0xc001036e60) Stream added, broadcasting: 5 I0420 00:19:57.752044 8 log.go:172] (0xc0067fe6e0) Reply frame received for 5 I0420 00:19:57.838204 8 log.go:172] (0xc0067fe6e0) Data frame received for 3 I0420 00:19:57.838262 8 log.go:172] (0xc0018d6aa0) (3) Data frame handling I0420 00:19:57.838305 8 log.go:172] (0xc0018d6aa0) (3) Data frame sent I0420 00:19:57.838325 8 log.go:172] (0xc0067fe6e0) Data frame received for 3 I0420 00:19:57.838343 8 log.go:172] (0xc0018d6aa0) (3) Data frame handling I0420 00:19:57.838373 8 log.go:172] (0xc0067fe6e0) Data frame received for 5 I0420 00:19:57.838392 8 log.go:172] (0xc001036e60) (5) Data frame handling I0420 00:19:57.839524 8 log.go:172] (0xc0067fe6e0) Data frame received for 1 I0420 00:19:57.839547 8 log.go:172] (0xc0013ecaa0) (1) Data frame handling I0420 00:19:57.839567 8 log.go:172] (0xc0013ecaa0) (1) Data frame sent I0420 00:19:57.839584 8 log.go:172] (0xc0067fe6e0) (0xc0013ecaa0) Stream removed, broadcasting: 1 I0420 00:19:57.839657 8 log.go:172] (0xc0067fe6e0) Go away received I0420 00:19:57.839724 8 log.go:172] (0xc0067fe6e0) (0xc0013ecaa0) Stream removed, broadcasting: 1 I0420 00:19:57.839763 8 log.go:172] (0xc0067fe6e0) (0xc0018d6aa0) Stream removed, broadcasting: 3 I0420 00:19:57.839780 8 log.go:172] (0xc0067fe6e0) (0xc001036e60) Stream removed, broadcasting: 5 Apr 20 00:19:57.839: INFO: Exec stderr: "" Apr 20 00:19:57.839: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.839: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.870462 8 log.go:172] 
(0xc0054f7b80) (0xc0018d6e60) Create stream I0420 00:19:57.870489 8 log.go:172] (0xc0054f7b80) (0xc0018d6e60) Stream added, broadcasting: 1 I0420 00:19:57.872598 8 log.go:172] (0xc0054f7b80) Reply frame received for 1 I0420 00:19:57.872632 8 log.go:172] (0xc0054f7b80) (0xc001036f00) Create stream I0420 00:19:57.872644 8 log.go:172] (0xc0054f7b80) (0xc001036f00) Stream added, broadcasting: 3 I0420 00:19:57.874092 8 log.go:172] (0xc0054f7b80) Reply frame received for 3 I0420 00:19:57.874120 8 log.go:172] (0xc0054f7b80) (0xc001037180) Create stream I0420 00:19:57.874140 8 log.go:172] (0xc0054f7b80) (0xc001037180) Stream added, broadcasting: 5 I0420 00:19:57.875171 8 log.go:172] (0xc0054f7b80) Reply frame received for 5 I0420 00:19:57.937326 8 log.go:172] (0xc0054f7b80) Data frame received for 5 I0420 00:19:57.937357 8 log.go:172] (0xc001037180) (5) Data frame handling I0420 00:19:57.937378 8 log.go:172] (0xc0054f7b80) Data frame received for 3 I0420 00:19:57.937385 8 log.go:172] (0xc001036f00) (3) Data frame handling I0420 00:19:57.937400 8 log.go:172] (0xc001036f00) (3) Data frame sent I0420 00:19:57.937495 8 log.go:172] (0xc0054f7b80) Data frame received for 3 I0420 00:19:57.937573 8 log.go:172] (0xc001036f00) (3) Data frame handling I0420 00:19:57.939075 8 log.go:172] (0xc0054f7b80) Data frame received for 1 I0420 00:19:57.939112 8 log.go:172] (0xc0018d6e60) (1) Data frame handling I0420 00:19:57.939139 8 log.go:172] (0xc0018d6e60) (1) Data frame sent I0420 00:19:57.939156 8 log.go:172] (0xc0054f7b80) (0xc0018d6e60) Stream removed, broadcasting: 1 I0420 00:19:57.939172 8 log.go:172] (0xc0054f7b80) Go away received I0420 00:19:57.939317 8 log.go:172] (0xc0054f7b80) (0xc0018d6e60) Stream removed, broadcasting: 1 I0420 00:19:57.939344 8 log.go:172] (0xc0054f7b80) (0xc001036f00) Stream removed, broadcasting: 3 I0420 00:19:57.939355 8 log.go:172] (0xc0054f7b80) (0xc001037180) Stream removed, broadcasting: 5 Apr 20 00:19:57.939: INFO: Exec stderr: "" Apr 20 00:19:57.939: 
INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:57.939: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:57.969926 8 log.go:172] (0xc0067fed10) (0xc0013ecd20) Create stream I0420 00:19:57.969968 8 log.go:172] (0xc0067fed10) (0xc0013ecd20) Stream added, broadcasting: 1 I0420 00:19:57.972623 8 log.go:172] (0xc0067fed10) Reply frame received for 1 I0420 00:19:57.972686 8 log.go:172] (0xc0067fed10) (0xc0018d6fa0) Create stream I0420 00:19:57.972712 8 log.go:172] (0xc0067fed10) (0xc0018d6fa0) Stream added, broadcasting: 3 I0420 00:19:57.973936 8 log.go:172] (0xc0067fed10) Reply frame received for 3 I0420 00:19:57.973978 8 log.go:172] (0xc0067fed10) (0xc0018d7040) Create stream I0420 00:19:57.973992 8 log.go:172] (0xc0067fed10) (0xc0018d7040) Stream added, broadcasting: 5 I0420 00:19:57.974988 8 log.go:172] (0xc0067fed10) Reply frame received for 5 I0420 00:19:58.041573 8 log.go:172] (0xc0067fed10) Data frame received for 3 I0420 00:19:58.041634 8 log.go:172] (0xc0018d6fa0) (3) Data frame handling I0420 00:19:58.041679 8 log.go:172] (0xc0018d6fa0) (3) Data frame sent I0420 00:19:58.041935 8 log.go:172] (0xc0067fed10) Data frame received for 3 I0420 00:19:58.041980 8 log.go:172] (0xc0018d6fa0) (3) Data frame handling I0420 00:19:58.042026 8 log.go:172] (0xc0067fed10) Data frame received for 5 I0420 00:19:58.042067 8 log.go:172] (0xc0018d7040) (5) Data frame handling I0420 00:19:58.043131 8 log.go:172] (0xc0067fed10) Data frame received for 1 I0420 00:19:58.043150 8 log.go:172] (0xc0013ecd20) (1) Data frame handling I0420 00:19:58.043159 8 log.go:172] (0xc0013ecd20) (1) Data frame sent I0420 00:19:58.043170 8 log.go:172] (0xc0067fed10) (0xc0013ecd20) Stream removed, broadcasting: 1 I0420 00:19:58.043182 8 log.go:172] (0xc0067fed10) Go away received I0420 00:19:58.043301 8 log.go:172] 
(0xc0067fed10) (0xc0013ecd20) Stream removed, broadcasting: 1 I0420 00:19:58.043350 8 log.go:172] (0xc0067fed10) (0xc0018d6fa0) Stream removed, broadcasting: 3 I0420 00:19:58.043391 8 log.go:172] (0xc0067fed10) (0xc0018d7040) Stream removed, broadcasting: 5 Apr 20 00:19:58.043: INFO: Exec stderr: "" Apr 20 00:19:58.043: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8993 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:19:58.043: INFO: >>> kubeConfig: /root/.kube/config I0420 00:19:58.080136 8 log.go:172] (0xc0070f8160) (0xc001a6d680) Create stream I0420 00:19:58.080171 8 log.go:172] (0xc0070f8160) (0xc001a6d680) Stream added, broadcasting: 1 I0420 00:19:58.082577 8 log.go:172] (0xc0070f8160) Reply frame received for 1 I0420 00:19:58.082621 8 log.go:172] (0xc0070f8160) (0xc0013ecf00) Create stream I0420 00:19:58.082644 8 log.go:172] (0xc0070f8160) (0xc0013ecf00) Stream added, broadcasting: 3 I0420 00:19:58.083989 8 log.go:172] (0xc0070f8160) Reply frame received for 3 I0420 00:19:58.084024 8 log.go:172] (0xc0070f8160) (0xc001037400) Create stream I0420 00:19:58.084036 8 log.go:172] (0xc0070f8160) (0xc001037400) Stream added, broadcasting: 5 I0420 00:19:58.084981 8 log.go:172] (0xc0070f8160) Reply frame received for 5 I0420 00:19:58.154484 8 log.go:172] (0xc0070f8160) Data frame received for 3 I0420 00:19:58.154507 8 log.go:172] (0xc0013ecf00) (3) Data frame handling I0420 00:19:58.154519 8 log.go:172] (0xc0013ecf00) (3) Data frame sent I0420 00:19:58.154592 8 log.go:172] (0xc0070f8160) Data frame received for 3 I0420 00:19:58.154609 8 log.go:172] (0xc0013ecf00) (3) Data frame handling I0420 00:19:58.155173 8 log.go:172] (0xc0070f8160) Data frame received for 5 I0420 00:19:58.155200 8 log.go:172] (0xc001037400) (5) Data frame handling I0420 00:19:58.156658 8 log.go:172] (0xc0070f8160) Data frame received for 1 I0420 00:19:58.156679 8 
log.go:172] (0xc001a6d680) (1) Data frame handling I0420 00:19:58.156698 8 log.go:172] (0xc001a6d680) (1) Data frame sent I0420 00:19:58.156764 8 log.go:172] (0xc0070f8160) (0xc001a6d680) Stream removed, broadcasting: 1 I0420 00:19:58.156870 8 log.go:172] (0xc0070f8160) (0xc001a6d680) Stream removed, broadcasting: 1 I0420 00:19:58.156912 8 log.go:172] (0xc0070f8160) (0xc0013ecf00) Stream removed, broadcasting: 3 I0420 00:19:58.156935 8 log.go:172] (0xc0070f8160) (0xc001037400) Stream removed, broadcasting: 5 Apr 20 00:19:58.156: INFO: Exec stderr: "" I0420 00:19:58.156974 8 log.go:172] (0xc0070f8160) Go away received [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:19:58.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8993" for this suite. • [SLOW TEST:11.248 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2513,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:19:58.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Apr 20 00:19:58.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:20:14.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3726" for this suite.
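The "set up a multi version CRD" step above registers a CustomResourceDefinition that serves two versions, then renames one and re-checks which names are served. A hypothetical manifest of the shape this test works with (the group, kind, and version names here are illustrative, not the generated ones the test actually uses):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-tests.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-tests
    singular: e2e-test
    kind: E2eTest
  versions:
    - name: v2           # renamed from v1; the old name stops being served/published
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v3           # the "other version", left unchanged by the rename
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Renaming a version amounts to replacing its entry under `spec.versions`; the published OpenAPI spec then drops the old name, which is exactly what the "check the old version name is removed" step verifies.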
• [SLOW TEST:16.798 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":129,"skipped":2549,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:20:14.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Apr 20 00:20:19.161: INFO: Pod pod-hostip-c0cfd201-77ac-4946-a89e-95714b0b6c51 has hostIP: 172.17.0.13
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:20:19.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5707" for this suite.
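The "has hostIP: 172.17.0.13" line above is the test observing that `status.hostIP` has been populated after scheduling. A minimal stand-in for that check using only the Go standard library (the framework's real implementation differs; this only sketches the assertion):

```go
package main

import (
	"fmt"
	"net"
)

// hasValidHostIP mirrors the e2e assertion: a pod "has a host IP" once
// status.hostIP is non-empty and parses as an IP address.
func hasValidHostIP(hostIP string) bool {
	return hostIP != "" && net.ParseIP(hostIP) != nil
}

func main() {
	fmt.Println(hasValidHostIP("172.17.0.13")) // the value seen in the log above
	fmt.Println(hasValidHostIP(""))            // pod not yet scheduled
}
```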
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2569,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:20:19.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:20:35.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1033" for this suite. • [SLOW TEST:16.139 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":131,"skipped":2586,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:20:35.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 20 00:20:35.388: INFO: Waiting up to 5m0s for pod "pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e" in namespace "emptydir-7033" to be "Succeeded or Failed" Apr 20 00:20:35.392: INFO: Pod "pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.685486ms Apr 20 00:20:37.400: INFO: Pod "pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011958805s Apr 20 00:20:39.404: INFO: Pod "pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015539221s STEP: Saw pod success Apr 20 00:20:39.404: INFO: Pod "pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e" satisfied condition "Succeeded or Failed" Apr 20 00:20:39.407: INFO: Trying to get logs from node latest-worker pod pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e container test-container: STEP: delete the pod Apr 20 00:20:39.456: INFO: Waiting for pod pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e to disappear Apr 20 00:20:39.470: INFO: Pod pod-599cb7e2-7fa8-40fb-9e6c-7e69a605cf0e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:20:39.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7033" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2593,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:20:39.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server 
cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:20:39.911: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:20:41.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938839, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938839, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938839, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722938839, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:20:44.971: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:20:44.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9295-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:20:46.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6619" for this suite. STEP: Destroying namespace "webhook-6619-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.825 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":133,"skipped":2602,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:20:46.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 20 00:20:46.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3436' Apr 20 00:20:46.697: INFO: stderr: "" Apr 20 00:20:46.698: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 20 00:20:46.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3436' Apr 20 00:20:46.983: INFO: stderr: "" Apr 20 00:20:46.983: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Apr 20 00:20:51.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3436' Apr 20 00:20:52.086: INFO: stderr: "" Apr 20 00:20:52.086: INFO: stdout: "update-demo-nautilus-dtpmp update-demo-nautilus-fr9sh " Apr 20 00:20:52.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtpmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3436' Apr 20 00:20:52.186: INFO: stderr: "" Apr 20 00:20:52.187: INFO: stdout: "true" Apr 20 00:20:52.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtpmp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3436' Apr 20 00:20:52.273: INFO: stderr: "" Apr 20 00:20:52.273: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 20 00:20:52.273: INFO: validating pod update-demo-nautilus-dtpmp Apr 20 00:20:52.277: INFO: got data: { "image": "nautilus.jpg" } Apr 20 00:20:52.277: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 20 00:20:52.277: INFO: update-demo-nautilus-dtpmp is verified up and running Apr 20 00:20:52.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fr9sh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3436' Apr 20 00:20:52.377: INFO: stderr: "" Apr 20 00:20:52.377: INFO: stdout: "true" Apr 20 00:20:52.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fr9sh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3436' Apr 20 00:20:52.493: INFO: stderr: "" Apr 20 00:20:52.493: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 20 00:20:52.493: INFO: validating pod update-demo-nautilus-fr9sh Apr 20 00:20:52.497: INFO: got data: { "image": "nautilus.jpg" } Apr 20 00:20:52.497: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 20 00:20:52.497: INFO: update-demo-nautilus-fr9sh is verified up and running STEP: using delete to clean up resources Apr 20 00:20:52.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3436' Apr 20 00:20:52.595: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 20 00:20:52.595: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 20 00:20:52.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3436' Apr 20 00:20:52.697: INFO: stderr: "No resources found in kubectl-3436 namespace.\n" Apr 20 00:20:52.697: INFO: stdout: "" Apr 20 00:20:52.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3436 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 20 00:20:52.798: INFO: stderr: "" Apr 20 00:20:52.798: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:20:52.798: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "kubectl-3436" for this suite. • [SLOW TEST:6.502 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":134,"skipped":2615,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:20:52.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 20 00:20:52.999: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4628 /api/v1/namespaces/watch-4628/configmaps/e2e-watch-test-label-changed 2da9cc40-1322-480b-903c-be0ee30905a4 9463386 0 2020-04-20 00:20:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:20:52.999: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4628 /api/v1/namespaces/watch-4628/configmaps/e2e-watch-test-label-changed 2da9cc40-1322-480b-903c-be0ee30905a4 9463387 0 2020-04-20 00:20:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:20:52.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4628 /api/v1/namespaces/watch-4628/configmaps/e2e-watch-test-label-changed 2da9cc40-1322-480b-903c-be0ee30905a4 9463388 0 2020-04-20 00:20:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 20 00:21:03.100: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4628 /api/v1/namespaces/watch-4628/configmaps/e2e-watch-test-label-changed 2da9cc40-1322-480b-903c-be0ee30905a4 9463440 0 2020-04-20 00:20:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:21:03.100: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4628 /api/v1/namespaces/watch-4628/configmaps/e2e-watch-test-label-changed 2da9cc40-1322-480b-903c-be0ee30905a4 9463441 0 2020-04-20 00:20:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:21:03.100: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4628 /api/v1/namespaces/watch-4628/configmaps/e2e-watch-test-label-changed 2da9cc40-1322-480b-903c-be0ee30905a4 9463442 0 2020-04-20 00:20:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:21:03.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4628" for this suite. 
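The watch test above leans on how the API server translates label edits into events for a label-selector watch: an object that stops matching the selector is reported to that watcher as DELETED (even though it still exists), and one that starts matching again is reported as ADDED, which is exactly the DELETED/ADDED sequence in the log. A small sketch of that decision logic (not the actual apiserver code):

```go
package main

import "fmt"

// eventFor returns the watch event a label-selector watcher sees when an
// object transitions between matching and not matching its selector.
func eventFor(matchedBefore, matchesNow bool) string {
	switch {
	case matchedBefore && matchesNow:
		return "MODIFIED"
	case matchedBefore && !matchesNow:
		return "DELETED" // leaves the watched set, as when the label was changed
	case !matchedBefore && matchesNow:
		return "ADDED" // re-enters the watched set when the label is restored
	default:
		return "" // never visible to this watcher
	}
}

func main() {
	fmt.Println(eventFor(true, false)) // label changed away from the selector
	fmt.Println(eventFor(false, true)) // label changed back
}
```

This is why the test can assert "Expecting not to observe a notification" while the label doesn't match: the object is simply outside the watcher's view during that window.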
• [SLOW TEST:10.308 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":135,"skipped":2627,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:21:03.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 20 00:21:03.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9" in namespace "projected-7877" to be "Succeeded or Failed"
Apr 20 00:21:03.193: INFO: Pod
"downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.411718ms Apr 20 00:21:05.198: INFO: Pod "downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008009283s Apr 20 00:21:07.202: INFO: Pod "downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012409242s STEP: Saw pod success Apr 20 00:21:07.203: INFO: Pod "downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9" satisfied condition "Succeeded or Failed" Apr 20 00:21:07.206: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9 container client-container: STEP: delete the pod Apr 20 00:21:07.242: INFO: Waiting for pod downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9 to disappear Apr 20 00:21:07.254: INFO: Pod downwardapi-volume-02ffb94d-ddb4-41d4-b483-03e5415de8a9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:21:07.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7877" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2659,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:21:07.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 20 00:21:11.872: INFO: Successfully updated pod "annotationupdate30aa837e-a6b1-4021-8fd3-939fdf319f5f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:21:13.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9843" for this suite.
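The annotation-update case relies on the fact that a downwardAPI volume file tracking `metadata.annotations` is refreshed by the kubelet after the pod's annotations are patched (unlike environment variables, which are fixed at container start). A sketch of such a pod, with illustrative names and annotation values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo      # illustrative; the test generates "annotationupdate<uid>"
  annotations:
    builder: alice                 # hypothetical annotation the test would later modify
spec:
  containers:
  - name: client-container
    image: busybox                 # assumption: any image with a shell works here
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations   # kubelet rewrites this file on modification
```

The "Successfully updated pod" log line corresponds to the step where the test patches the annotations and then watches the mounted file for the new value.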
• [SLOW TEST:6.635 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2672,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:21:13.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Apr 20 00:21:14.004: INFO: Waiting up to 5m0s for pod "var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083" in namespace "var-expansion-5597" to be "Succeeded or Failed"
Apr 20 00:21:14.010: INFO: Pod "var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083": Phase="Pending", Reason="", readiness=false. Elapsed: 5.556639ms
Apr 20 00:21:16.777: INFO: Pod "var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773054554s
Apr 20 00:21:18.782: INFO: Pod "var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.777493565s
STEP: Saw pod success
Apr 20 00:21:18.782: INFO: Pod "var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083" satisfied condition "Succeeded or Failed"
Apr 20 00:21:18.784: INFO: Trying to get logs from node latest-worker2 pod var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083 container dapi-container:
STEP: delete the pod
Apr 20 00:21:18.963: INFO: Waiting for pod var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083 to disappear
Apr 20 00:21:18.998: INFO: Pod var-expansion-33dc177e-be22-42b3-8981-0e59a1cf6083 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:21:18.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5597" for this suite.
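Variable expansion means `$(VAR)` references in a container's `command`/`args` are substituted by Kubernetes from the container's declared `env` entries before the process starts. A minimal sketch of a pod like the one this test creates (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative; the test uses "var-expansion-<uid>"
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # assumption: any image with a shell works here
    env:
    - name: MESSAGE                # hypothetical variable name
      value: "test message"
    # $(MESSAGE) is expanded by Kubernetes itself, before the shell ever runs;
    # an unresolvable $(...) reference would be passed through literally.
    command: ["sh", "-c", "echo $(MESSAGE)"]
```

The test then reads the container's log (the "container dapi-container" step above) and checks that the expanded value, not the literal `$(MESSAGE)`, was printed.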
• [SLOW TEST:5.195 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2675,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:21:19.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:21:19.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4526" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":139,"skipped":2680,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:21:19.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 in namespace container-probe-2125
Apr 20 00:21:23.364: INFO: Started pod liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 in namespace container-probe-2125
STEP: checking the pod's current state and verifying that restartCount is present
Apr 20 00:21:23.367: INFO: Initial restart count of pod liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 is 0
Apr 20 00:21:41.544: INFO: Restart count of pod container-probe-2125/liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 is now 1 (18.176597775s elapsed)
Apr 20 00:22:01.583: INFO: Restart count of pod container-probe-2125/liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 is now 2 (38.215206542s elapsed)
Apr 20 00:22:21.627: INFO: Restart count of pod container-probe-2125/liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 is now 3 (58.259813331s elapsed)
Apr 20 00:22:41.669: INFO: Restart count of pod container-probe-2125/liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 is now 4 (1m18.301510078s elapsed)
Apr 20 00:23:47.807: INFO: Restart count of pod container-probe-2125/liveness-dd9b176d-2e90-4aca-8576-83c381c8c7a6 is now 5 (2m24.439268003s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:23:47.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2125" for this suite.
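The restart-count test runs a container whose health endpoint begins failing, so the kubelet kills and restarts it on each liveness failure; the log above polls `status.containerStatuses[].restartCount` and asserts it only ever increases. A minimal sketch of such a pod — the image, port, and probe timings here are illustrative placeholders, not the e2e framework's exact values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo         # illustrative; the test uses "liveness-<uid>"
spec:
  containers:
  - name: liveness
    # assumption: an image whose server deliberately starts failing /healthz
    # after a short period, as the e2e test images do
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    args: ["liveness"]
    livenessProbe:
      httpGet:
        path: /healthz             # the probe path named in the suite's earlier test
        port: 8080                 # placeholder port
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1          # a single failed probe triggers a restart
```

With the default `restartPolicy: Always`, each probe failure produces one restart, giving the 1, 2, 3, 4, 5 progression recorded in the log (with back-off widening the gaps, visible in the jump from 1m18s to 2m24s).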
• [SLOW TEST:148.624 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2719,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:23:47.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 20 00:23:47.924: INFO: Waiting up to 5m0s for pod "pod-1e1b003c-7fe6-4424-b6fe-112cae3937be" in namespace "emptydir-5440" to be "Succeeded or Failed"
Apr 20 00:23:47.961: INFO: Pod "pod-1e1b003c-7fe6-4424-b6fe-112cae3937be": Phase="Pending", Reason="", readiness=false. Elapsed: 36.254479ms
Apr 20 00:23:49.964: INFO: Pod "pod-1e1b003c-7fe6-4424-b6fe-112cae3937be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039901758s
Apr 20 00:23:51.969: INFO: Pod "pod-1e1b003c-7fe6-4424-b6fe-112cae3937be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044345805s
STEP: Saw pod success
Apr 20 00:23:51.969: INFO: Pod "pod-1e1b003c-7fe6-4424-b6fe-112cae3937be" satisfied condition "Succeeded or Failed"
Apr 20 00:23:51.972: INFO: Trying to get logs from node latest-worker2 pod pod-1e1b003c-7fe6-4424-b6fe-112cae3937be container test-container:
STEP: delete the pod
Apr 20 00:23:52.007: INFO: Waiting for pod pod-1e1b003c-7fe6-4424-b6fe-112cae3937be to disappear
Apr 20 00:23:52.011: INFO: Pod pod-1e1b003c-7fe6-4424-b6fe-112cae3937be no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:23:52.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5440" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2735,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:23:52.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Apr 20 00:23:52.085: INFO: Waiting up to 5m0s for pod "client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824" in namespace "containers-5282" to be "Succeeded or Failed"
Apr 20 00:23:52.105: INFO: Pod "client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824": Phase="Pending", Reason="", readiness=false. Elapsed: 19.478702ms
Apr 20 00:23:54.110: INFO: Pod "client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024209372s
Apr 20 00:23:56.114: INFO: Pod "client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028549763s
STEP: Saw pod success
Apr 20 00:23:56.114: INFO: Pod "client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824" satisfied condition "Succeeded or Failed"
Apr 20 00:23:56.117: INFO: Trying to get logs from node latest-worker2 pod client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824 container test-container:
STEP: delete the pod
Apr 20 00:23:56.190: INFO: Waiting for pod client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824 to disappear
Apr 20 00:23:56.197: INFO: Pod client-containers-f08939a8-80b4-4ba2-bd0f-7020745aa824 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:23:56.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5282" for this suite.
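The entrypoint-override case rests on the mapping between pod fields and Docker image metadata: `spec.containers[].command` replaces the image's ENTRYPOINT, and `spec.containers[].args` replaces its CMD; when either is omitted, the image's value is used. A minimal sketch (names and the echoed string are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo   # illustrative; the test uses "client-containers-<uid>"
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # assumption: any image with its own ENTRYPOINT/CMD
    # `command` replaces the image ENTRYPOINT entirely; the image's CMD is
    # also ignored because no `args` are given alongside a `command`.
    command: ["echo", "entrypoint overridden"]
```

The test verifies the override by reading the container's log (the "container test-container" step above) and checking it contains the overridden output rather than the image's default.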
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2755,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:23:56.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-347 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-347;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-347 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-347;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-347.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-347.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-347.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-347.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-347.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.225_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-347 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-347;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-347 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-347;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-347.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-347.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-347.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-347.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-347.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-347.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-347.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-347.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.225_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 20 00:24:02.375: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.380: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.383: INFO: Unable to read wheezy_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.390: INFO: Unable to read wheezy_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.393: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.396: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.398: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.419: INFO: Unable to read jessie_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.422: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.425: INFO: Unable to read jessie_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.428: INFO: Unable to read jessie_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.431: INFO: Unable to read jessie_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.434: INFO: Unable to read jessie_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.437: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.440: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:02.457: INFO: Lookups using dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-347 wheezy_tcp@dns-test-service.dns-347 wheezy_udp@dns-test-service.dns-347.svc wheezy_tcp@dns-test-service.dns-347.svc wheezy_udp@_http._tcp.dns-test-service.dns-347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-347 jessie_tcp@dns-test-service.dns-347 jessie_udp@dns-test-service.dns-347.svc jessie_tcp@dns-test-service.dns-347.svc jessie_udp@_http._tcp.dns-test-service.dns-347.svc jessie_tcp@_http._tcp.dns-test-service.dns-347.svc]
Apr 20 00:24:07.462: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.465: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.468: INFO: Unable to read wheezy_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.475: INFO: Unable to read wheezy_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.478: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.484: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.502: INFO: Unable to read jessie_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.505: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.507: INFO: Unable to read jessie_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.511: INFO: Unable to read jessie_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.514: INFO: Unable to read jessie_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.517: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.520: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:07.534: INFO: Lookups using dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-347 wheezy_tcp@dns-test-service.dns-347 wheezy_udp@dns-test-service.dns-347.svc wheezy_tcp@dns-test-service.dns-347.svc wheezy_udp@_http._tcp.dns-test-service.dns-347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-347 jessie_tcp@dns-test-service.dns-347 jessie_udp@dns-test-service.dns-347.svc jessie_tcp@dns-test-service.dns-347.svc jessie_udp@_http._tcp.dns-test-service.dns-347.svc jessie_tcp@_http._tcp.dns-test-service.dns-347.svc]
Apr 20 00:24:12.461: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.464: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.466: INFO: Unable to read wheezy_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.469: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.471: INFO: Unable to read wheezy_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.473: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.476: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.478: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.495: INFO: Unable to read jessie_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.498: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.500: INFO: Unable to read jessie_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.503: INFO: Unable to read jessie_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.506: INFO: Unable to read jessie_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.511: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.514: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:12.530: INFO: Lookups using dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-347 wheezy_tcp@dns-test-service.dns-347 wheezy_udp@dns-test-service.dns-347.svc wheezy_tcp@dns-test-service.dns-347.svc wheezy_udp@_http._tcp.dns-test-service.dns-347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-347 jessie_tcp@dns-test-service.dns-347 jessie_udp@dns-test-service.dns-347.svc jessie_tcp@dns-test-service.dns-347.svc jessie_udp@_http._tcp.dns-test-service.dns-347.svc jessie_tcp@_http._tcp.dns-test-service.dns-347.svc]
Apr 20 00:24:17.462: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:17.466: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:17.469: INFO: Unable to read wheezy_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff)
Apr 20 00:24:17.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347 from pod
dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.474: INFO: Unable to read wheezy_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.478: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.481: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.500: INFO: Unable to read jessie_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.503: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.507: INFO: Unable to read jessie_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.510: INFO: Unable to read jessie_tcp@dns-test-service.dns-347 
from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.512: INFO: Unable to read jessie_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.515: INFO: Unable to read jessie_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.518: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.520: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:17.535: INFO: Lookups using dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-347 wheezy_tcp@dns-test-service.dns-347 wheezy_udp@dns-test-service.dns-347.svc wheezy_tcp@dns-test-service.dns-347.svc wheezy_udp@_http._tcp.dns-test-service.dns-347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-347 jessie_tcp@dns-test-service.dns-347 jessie_udp@dns-test-service.dns-347.svc jessie_tcp@dns-test-service.dns-347.svc jessie_udp@_http._tcp.dns-test-service.dns-347.svc jessie_tcp@_http._tcp.dns-test-service.dns-347.svc] Apr 20 00:24:22.463: INFO: Unable to read wheezy_udp@dns-test-service 
from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.467: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.470: INFO: Unable to read wheezy_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.473: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.475: INFO: Unable to read wheezy_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.478: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.483: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.505: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.508: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.511: INFO: Unable to read jessie_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.515: INFO: Unable to read jessie_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.517: INFO: Unable to read jessie_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.520: INFO: Unable to read jessie_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.523: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:22.544: INFO: Lookups 
using dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-347 wheezy_tcp@dns-test-service.dns-347 wheezy_udp@dns-test-service.dns-347.svc wheezy_tcp@dns-test-service.dns-347.svc wheezy_udp@_http._tcp.dns-test-service.dns-347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-347 jessie_tcp@dns-test-service.dns-347 jessie_udp@dns-test-service.dns-347.svc jessie_tcp@dns-test-service.dns-347.svc jessie_udp@_http._tcp.dns-test-service.dns-347.svc jessie_tcp@_http._tcp.dns-test-service.dns-347.svc] Apr 20 00:24:27.461: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.464: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.467: INFO: Unable to read wheezy_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.470: INFO: Unable to read wheezy_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.473: INFO: Unable to read wheezy_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.475: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.477: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.479: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.498: INFO: Unable to read jessie_udp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.500: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.503: INFO: Unable to read jessie_udp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-347 from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.508: INFO: Unable to read jessie_udp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.511: INFO: Unable 
to read jessie_tcp@dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-347.svc from pod dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff: the server could not find the requested resource (get pods dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff) Apr 20 00:24:27.531: INFO: Lookups using dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-347 wheezy_tcp@dns-test-service.dns-347 wheezy_udp@dns-test-service.dns-347.svc wheezy_tcp@dns-test-service.dns-347.svc wheezy_udp@_http._tcp.dns-test-service.dns-347.svc wheezy_tcp@_http._tcp.dns-test-service.dns-347.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-347 jessie_tcp@dns-test-service.dns-347 jessie_udp@dns-test-service.dns-347.svc jessie_tcp@dns-test-service.dns-347.svc jessie_udp@_http._tcp.dns-test-service.dns-347.svc jessie_tcp@_http._tcp.dns-test-service.dns-347.svc] Apr 20 00:24:32.543: INFO: DNS probes using dns-347/dns-test-7ff33e56-1f71-485d-aaa3-aa851f9370ff succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:24:33.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-347" for this suite. 
• [SLOW TEST:36.992 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":143,"skipped":2762,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:24:33.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3927.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3927.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3927.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3927.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3927.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3927.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 20 00:24:39.405: INFO: DNS probes using dns-3927/dns-test-31d06ba4-6909-4ec6-bc2e-6acad9d0800e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:24:39.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3927" for this suite. 
• [SLOW TEST:6.252 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":144,"skipped":2766,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:24:39.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 20 00:24:39.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-123' Apr 20 
00:24:39.593: INFO: stderr: "" Apr 20 00:24:39.593: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 20 00:24:39.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-123' Apr 20 00:24:52.745: INFO: stderr: "" Apr 20 00:24:52.745: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:24:52.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-123" for this suite. • [SLOW TEST:13.301 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":145,"skipped":2767,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:24:52.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 20 00:25:00.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:00.895: INFO: Pod pod-with-prestop-exec-hook still exists Apr 20 00:25:02.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:02.939: INFO: Pod pod-with-prestop-exec-hook still exists Apr 20 00:25:04.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:04.932: INFO: Pod pod-with-prestop-exec-hook still exists Apr 20 00:25:06.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:06.899: INFO: Pod pod-with-prestop-exec-hook still exists Apr 20 00:25:08.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:08.900: INFO: Pod pod-with-prestop-exec-hook still exists Apr 20 00:25:10.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:10.899: INFO: Pod pod-with-prestop-exec-hook still exists Apr 20 00:25:12.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:12.899: INFO: Pod pod-with-prestop-exec-hook still exists Apr 20 00:25:14.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 20 00:25:14.899: 
INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:14.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-773" for this suite. • [SLOW TEST:22.169 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2792,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:25:14.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for 
multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 20 00:25:14.967: INFO: >>> kubeConfig: /root/.kube/config Apr 20 00:25:17.915: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:28.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1353" for this suite. • [SLOW TEST:13.557 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":147,"skipped":2819,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Apr 20 00:25:28.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:25:28.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc" in namespace "projected-8697" to be "Succeeded or Failed" Apr 20 00:25:28.571: INFO: Pod "downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.426649ms Apr 20 00:25:30.575: INFO: Pod "downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011595747s Apr 20 00:25:32.580: INFO: Pod "downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016079252s STEP: Saw pod success Apr 20 00:25:32.580: INFO: Pod "downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc" satisfied condition "Succeeded or Failed" Apr 20 00:25:32.583: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc container client-container: STEP: delete the pod Apr 20 00:25:32.618: INFO: Waiting for pod downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc to disappear Apr 20 00:25:32.630: INFO: Pod downwardapi-volume-97d83fea-5bba-4492-9470-67c3c0f1d4cc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:32.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8697" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2825,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:25:32.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating a pod to test downward api env vars Apr 20 00:25:32.740: INFO: Waiting up to 5m0s for pod "downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca" in namespace "downward-api-9906" to be "Succeeded or Failed" Apr 20 00:25:32.744: INFO: Pod "downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606335ms Apr 20 00:25:34.749: INFO: Pod "downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008329972s Apr 20 00:25:36.753: INFO: Pod "downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011993886s STEP: Saw pod success Apr 20 00:25:36.753: INFO: Pod "downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca" satisfied condition "Succeeded or Failed" Apr 20 00:25:36.755: INFO: Trying to get logs from node latest-worker pod downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca container dapi-container: STEP: delete the pod Apr 20 00:25:36.808: INFO: Waiting for pod downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca to disappear Apr 20 00:25:36.822: INFO: Pod downward-api-f97e62b8-9f54-4a4a-81a7-ddd1ca6785ca no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:36.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9906" for this suite. 
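For context, the pattern this spec exercises — exposing the pod's own UID to a container through the downward API — can be reproduced with a manifest along these lines (a minimal sketch; the pod name, image, and env var name are illustrative, not the exact objects the e2e framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID                  # any env var name works
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # downward API field reference
```

The container prints the injected variable and exits, leaving the pod in the Succeeded phase — which is what the `Waiting up to 5m0s ... to be "Succeeded or Failed"` poll in the log above is checking for.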
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2832,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:25:36.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:41.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3405" for this suite. 
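The "should not conflict" spec above mounts a secret-backed volume and a configMap-backed volume in the same pod (both are implemented by the kubelet as wrapped emptyDir volumes) and verifies they coexist. A rough sketch of that shape, with illustrative names and assuming the referenced secret and configMap already exist:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo            # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-test-secret    # assumed to exist
  - name: configmap-vol
    configMap:
      name: wrapper-test-configmap       # assumed to exist
```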
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":150,"skipped":2844,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:25:41.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 20 00:25:41.428: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 20 00:25:46.442: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:47.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9206" for this suite. 
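The release behavior verified above — a pod leaving a ReplicationController's control once its labels stop matching the selector — can be demonstrated manually with objects roughly like these (illustrative names, not the framework's exact manifests):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo         # illustrative name
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release        # must match the selector above
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.38-alpine
```

Relabeling the managed pod, e.g. `kubectl label pod <pod-name> name=released --overwrite`, takes it out of the selector's scope; the controller then releases the pod (its controller ownerReference is cleared) and creates a replacement to restore the replica count.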
• [SLOW TEST:6.318 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":151,"skipped":2847,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:25:47.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-7af49dfd-0405-4f54-98df-b150750656e4 STEP: Creating a pod to test consume configMaps Apr 20 00:25:47.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8" in namespace "configmap-2240" to be "Succeeded or Failed" Apr 20 00:25:47.619: INFO: Pod "pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.663299ms Apr 20 00:25:49.624: INFO: Pod "pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007283417s Apr 20 00:25:51.629: INFO: Pod "pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012426041s STEP: Saw pod success Apr 20 00:25:51.629: INFO: Pod "pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8" satisfied condition "Succeeded or Failed" Apr 20 00:25:51.632: INFO: Trying to get logs from node latest-worker pod pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8 container configmap-volume-test: STEP: delete the pod Apr 20 00:25:51.650: INFO: Waiting for pod pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8 to disappear Apr 20 00:25:51.655: INFO: Pod pod-configmaps-cfb946b0-8bb2-4cc4-9d82-ef9ed0d5c8b8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:51.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2240" for this suite. 
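The "mappings as non-root" spec above combines two things: a configMap volume whose `items` remap a key to a custom path, and a pod-level `runAsUser` so the file is read by a non-root UID. A minimal sketch under assumed names (the configMap is assumed to hold a key `data-1`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo           # illustrative name
spec:
  securityContext:
    runAsUser: 1000                      # non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-vol
    configMap:
      name: configmap-test-volume-map    # assumed to exist with key "data-1"
      items:
      - key: data-1
        path: path/to/data               # key remapped to a custom relative path
```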
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2859,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:25:51.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-721a483f-6b25-4571-bab8-32819fff20f2 STEP: Creating secret with name s-test-opt-upd-50b7662b-78d2-4ab1-811a-61ffdba6bbab STEP: Creating the pod STEP: Deleting secret s-test-opt-del-721a483f-6b25-4571-bab8-32819fff20f2 STEP: Updating secret s-test-opt-upd-50b7662b-78d2-4ab1-811a-61ffdba6bbab STEP: Creating secret with name s-test-opt-create-242aeda7-af7a-4ed2-b8af-9147368ba68c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:25:59.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8238" for this suite. 
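The optional-secret mechanics exercised above hinge on `optional: true` in the volume source: the pod is admitted even while the referenced secret is absent, and later creations or updates are projected into the mounted volume by the kubelet's sync loop — the "waiting to observe update in volume" step in the log. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo         # illustrative name
spec:
  containers:
  - name: creates-volume-test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: maybe-secret
    secret:
      secretName: s-test-opt-create  # may not exist at pod creation time
      optional: true                 # pod starts even if the secret is absent
```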
• [SLOW TEST:8.274 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2885,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:25:59.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:26:11.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3317" for this suite. • [SLOW TEST:11.277 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":154,"skipped":2899,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:26:11.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7270.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7270.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7270.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7270.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 20 00:26:17.404: INFO: DNS probes using dns-7270/dns-test-fe10a8b5-a428-4a49-af50-0dd8ad1dfc7e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:26:17.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7270" for this suite. 
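The hostnames probed by the `getent`/`dig` loops above come from a headless service plus a pod that sets `hostname` and `subdomain`. The objects behind those records look roughly like this (a sketch reconstructed from the probed names; not the framework's exact manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2        # matches the service name probed above
spec:
  clusterIP: None                 # headless: DNS resolves to per-pod records
  selector:
    name: dns-querier-2
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # yields <hostname>.<subdomain>.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: busybox:1.29
    command: ["sleep", "3600"]
```

With these in place, `dns-querier-2.dns-test-service-2.dns-7270.svc.cluster.local` resolves to the pod's IP, which is exactly what the wheezy/jessie probe commands assert.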
• [SLOW TEST:6.315 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":155,"skipped":2905,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:26:17.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8149 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 20 00:26:18.293: INFO: Found 0 stateful pods, waiting for 3 Apr 20 00:26:28.298: INFO: Waiting for pod 
ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:26:28.298: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:26:28.298: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:26:28.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8149 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 20 00:26:28.550: INFO: stderr: "I0420 00:26:28.437724 1876 log.go:172] (0xc00003ac60) (0xc000990000) Create stream\nI0420 00:26:28.437808 1876 log.go:172] (0xc00003ac60) (0xc000990000) Stream added, broadcasting: 1\nI0420 00:26:28.440438 1876 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0420 00:26:28.440487 1876 log.go:172] (0xc00003ac60) (0xc000a1c000) Create stream\nI0420 00:26:28.440500 1876 log.go:172] (0xc00003ac60) (0xc000a1c000) Stream added, broadcasting: 3\nI0420 00:26:28.441677 1876 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0420 00:26:28.441717 1876 log.go:172] (0xc00003ac60) (0xc0009900a0) Create stream\nI0420 00:26:28.441734 1876 log.go:172] (0xc00003ac60) (0xc0009900a0) Stream added, broadcasting: 5\nI0420 00:26:28.442629 1876 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0420 00:26:28.515433 1876 log.go:172] (0xc00003ac60) Data frame received for 5\nI0420 00:26:28.515463 1876 log.go:172] (0xc0009900a0) (5) Data frame handling\nI0420 00:26:28.515484 1876 log.go:172] (0xc0009900a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0420 00:26:28.543047 1876 log.go:172] (0xc00003ac60) Data frame received for 3\nI0420 00:26:28.543132 1876 log.go:172] (0xc000a1c000) (3) Data frame handling\nI0420 00:26:28.543171 1876 log.go:172] (0xc000a1c000) (3) Data frame sent\nI0420 00:26:28.543186 1876 log.go:172] (0xc00003ac60) Data frame received for 3\nI0420 00:26:28.543200 1876 
log.go:172] (0xc000a1c000) (3) Data frame handling\nI0420 00:26:28.543247 1876 log.go:172] (0xc00003ac60) Data frame received for 5\nI0420 00:26:28.543274 1876 log.go:172] (0xc0009900a0) (5) Data frame handling\nI0420 00:26:28.545284 1876 log.go:172] (0xc00003ac60) Data frame received for 1\nI0420 00:26:28.545384 1876 log.go:172] (0xc000990000) (1) Data frame handling\nI0420 00:26:28.545430 1876 log.go:172] (0xc000990000) (1) Data frame sent\nI0420 00:26:28.545442 1876 log.go:172] (0xc00003ac60) (0xc000990000) Stream removed, broadcasting: 1\nI0420 00:26:28.545453 1876 log.go:172] (0xc00003ac60) Go away received\nI0420 00:26:28.545995 1876 log.go:172] (0xc00003ac60) (0xc000990000) Stream removed, broadcasting: 1\nI0420 00:26:28.546020 1876 log.go:172] (0xc00003ac60) (0xc000a1c000) Stream removed, broadcasting: 3\nI0420 00:26:28.546033 1876 log.go:172] (0xc00003ac60) (0xc0009900a0) Stream removed, broadcasting: 5\n" Apr 20 00:26:28.550: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 20 00:26:28.550: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 20 00:26:38.596: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 20 00:26:48.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8149 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 20 00:26:48.842: INFO: stderr: "I0420 00:26:48.749293 1896 log.go:172] (0xc0000e14a0) (0xc000796000) Create stream\nI0420 00:26:48.749344 1896 log.go:172] (0xc0000e14a0) (0xc000796000) Stream added, broadcasting: 1\nI0420 00:26:48.751464 1896 log.go:172] (0xc0000e14a0) Reply frame received for 
1\nI0420 00:26:48.751522 1896 log.go:172] (0xc0000e14a0) (0xc0007ea000) Create stream\nI0420 00:26:48.751541 1896 log.go:172] (0xc0000e14a0) (0xc0007ea000) Stream added, broadcasting: 3\nI0420 00:26:48.752451 1896 log.go:172] (0xc0000e14a0) Reply frame received for 3\nI0420 00:26:48.752493 1896 log.go:172] (0xc0000e14a0) (0xc0007ea0a0) Create stream\nI0420 00:26:48.752520 1896 log.go:172] (0xc0000e14a0) (0xc0007ea0a0) Stream added, broadcasting: 5\nI0420 00:26:48.754089 1896 log.go:172] (0xc0000e14a0) Reply frame received for 5\nI0420 00:26:48.835670 1896 log.go:172] (0xc0000e14a0) Data frame received for 5\nI0420 00:26:48.835707 1896 log.go:172] (0xc0007ea0a0) (5) Data frame handling\nI0420 00:26:48.835723 1896 log.go:172] (0xc0007ea0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0420 00:26:48.835739 1896 log.go:172] (0xc0000e14a0) Data frame received for 3\nI0420 00:26:48.835746 1896 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0420 00:26:48.835753 1896 log.go:172] (0xc0007ea000) (3) Data frame sent\nI0420 00:26:48.835759 1896 log.go:172] (0xc0000e14a0) Data frame received for 3\nI0420 00:26:48.835766 1896 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0420 00:26:48.835786 1896 log.go:172] (0xc0000e14a0) Data frame received for 5\nI0420 00:26:48.835804 1896 log.go:172] (0xc0007ea0a0) (5) Data frame handling\nI0420 00:26:48.837489 1896 log.go:172] (0xc0000e14a0) Data frame received for 1\nI0420 00:26:48.837511 1896 log.go:172] (0xc000796000) (1) Data frame handling\nI0420 00:26:48.837536 1896 log.go:172] (0xc000796000) (1) Data frame sent\nI0420 00:26:48.837553 1896 log.go:172] (0xc0000e14a0) (0xc000796000) Stream removed, broadcasting: 1\nI0420 00:26:48.837572 1896 log.go:172] (0xc0000e14a0) Go away received\nI0420 00:26:48.837879 1896 log.go:172] (0xc0000e14a0) (0xc000796000) Stream removed, broadcasting: 1\nI0420 00:26:48.837900 1896 log.go:172] (0xc0000e14a0) (0xc0007ea000) Stream removed, broadcasting: 3\nI0420 
00:26:48.837908 1896 log.go:172] (0xc0000e14a0) (0xc0007ea0a0) Stream removed, broadcasting: 5\n" Apr 20 00:26:48.842: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 20 00:26:48.842: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 20 00:26:59.195: INFO: Waiting for StatefulSet statefulset-8149/ss2 to complete update Apr 20 00:26:59.195: INFO: Waiting for Pod statefulset-8149/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 20 00:26:59.195: INFO: Waiting for Pod statefulset-8149/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 20 00:26:59.195: INFO: Waiting for Pod statefulset-8149/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 20 00:27:09.215: INFO: Waiting for StatefulSet statefulset-8149/ss2 to complete update Apr 20 00:27:09.215: INFO: Waiting for Pod statefulset-8149/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 20 00:27:19.204: INFO: Waiting for StatefulSet statefulset-8149/ss2 to complete update Apr 20 00:27:19.204: INFO: Waiting for Pod statefulset-8149/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 20 00:27:29.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8149 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 20 00:27:29.467: INFO: stderr: "I0420 00:27:29.332930 1915 log.go:172] (0xc000ac4c60) (0xc000a96500) Create stream\nI0420 00:27:29.332985 1915 log.go:172] (0xc000ac4c60) (0xc000a96500) Stream added, broadcasting: 1\nI0420 00:27:29.335708 1915 log.go:172] (0xc000ac4c60) Reply frame received for 1\nI0420 00:27:29.335745 1915 log.go:172] (0xc000ac4c60) (0xc000a965a0) Create stream\nI0420 00:27:29.335758 1915 log.go:172] 
(0xc000ac4c60) (0xc000a965a0) Stream added, broadcasting: 3\nI0420 00:27:29.336803 1915 log.go:172] (0xc000ac4c60) Reply frame received for 3\nI0420 00:27:29.336838 1915 log.go:172] (0xc000ac4c60) (0xc000a96640) Create stream\nI0420 00:27:29.336850 1915 log.go:172] (0xc000ac4c60) (0xc000a96640) Stream added, broadcasting: 5\nI0420 00:27:29.337939 1915 log.go:172] (0xc000ac4c60) Reply frame received for 5\nI0420 00:27:29.431832 1915 log.go:172] (0xc000ac4c60) Data frame received for 5\nI0420 00:27:29.431857 1915 log.go:172] (0xc000a96640) (5) Data frame handling\nI0420 00:27:29.431876 1915 log.go:172] (0xc000a96640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0420 00:27:29.460594 1915 log.go:172] (0xc000ac4c60) Data frame received for 3\nI0420 00:27:29.460614 1915 log.go:172] (0xc000a965a0) (3) Data frame handling\nI0420 00:27:29.460625 1915 log.go:172] (0xc000a965a0) (3) Data frame sent\nI0420 00:27:29.460631 1915 log.go:172] (0xc000ac4c60) Data frame received for 3\nI0420 00:27:29.460638 1915 log.go:172] (0xc000a965a0) (3) Data frame handling\nI0420 00:27:29.460724 1915 log.go:172] (0xc000ac4c60) Data frame received for 5\nI0420 00:27:29.460744 1915 log.go:172] (0xc000a96640) (5) Data frame handling\nI0420 00:27:29.463135 1915 log.go:172] (0xc000ac4c60) Data frame received for 1\nI0420 00:27:29.463149 1915 log.go:172] (0xc000a96500) (1) Data frame handling\nI0420 00:27:29.463156 1915 log.go:172] (0xc000a96500) (1) Data frame sent\nI0420 00:27:29.463262 1915 log.go:172] (0xc000ac4c60) (0xc000a96500) Stream removed, broadcasting: 1\nI0420 00:27:29.463315 1915 log.go:172] (0xc000ac4c60) Go away received\nI0420 00:27:29.463530 1915 log.go:172] (0xc000ac4c60) (0xc000a96500) Stream removed, broadcasting: 1\nI0420 00:27:29.463546 1915 log.go:172] (0xc000ac4c60) (0xc000a965a0) Stream removed, broadcasting: 3\nI0420 00:27:29.463552 1915 log.go:172] (0xc000ac4c60) (0xc000a96640) Stream removed, broadcasting: 5\n" Apr 20 00:27:29.467: INFO: 
stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 20 00:27:29.467: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 20 00:27:39.499: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 20 00:27:49.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8149 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 20 00:27:49.763: INFO: stderr: "I0420 00:27:49.667946 1934 log.go:172] (0xc0008e80b0) (0xc00029caa0) Create stream\nI0420 00:27:49.668011 1934 log.go:172] (0xc0008e80b0) (0xc00029caa0) Stream added, broadcasting: 1\nI0420 00:27:49.670975 1934 log.go:172] (0xc0008e80b0) Reply frame received for 1\nI0420 00:27:49.671039 1934 log.go:172] (0xc0008e80b0) (0xc000a5c000) Create stream\nI0420 00:27:49.671058 1934 log.go:172] (0xc0008e80b0) (0xc000a5c000) Stream added, broadcasting: 3\nI0420 00:27:49.672124 1934 log.go:172] (0xc0008e80b0) Reply frame received for 3\nI0420 00:27:49.672155 1934 log.go:172] (0xc0008e80b0) (0xc000ae6000) Create stream\nI0420 00:27:49.672169 1934 log.go:172] (0xc0008e80b0) (0xc000ae6000) Stream added, broadcasting: 5\nI0420 00:27:49.673051 1934 log.go:172] (0xc0008e80b0) Reply frame received for 5\nI0420 00:27:49.755739 1934 log.go:172] (0xc0008e80b0) Data frame received for 5\nI0420 00:27:49.755799 1934 log.go:172] (0xc0008e80b0) Data frame received for 3\nI0420 00:27:49.755862 1934 log.go:172] (0xc000a5c000) (3) Data frame handling\nI0420 00:27:49.755893 1934 log.go:172] (0xc000a5c000) (3) Data frame sent\nI0420 00:27:49.755907 1934 log.go:172] (0xc0008e80b0) Data frame received for 3\nI0420 00:27:49.755917 1934 log.go:172] (0xc000a5c000) (3) Data frame handling\nI0420 00:27:49.755936 1934 log.go:172] (0xc000ae6000) (5) Data frame handling\nI0420 00:27:49.755959 1934 
log.go:172] (0xc000ae6000) (5) Data frame sent\nI0420 00:27:49.755973 1934 log.go:172] (0xc0008e80b0) Data frame received for 5\nI0420 00:27:49.756004 1934 log.go:172] (0xc000ae6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0420 00:27:49.757697 1934 log.go:172] (0xc0008e80b0) Data frame received for 1\nI0420 00:27:49.757732 1934 log.go:172] (0xc00029caa0) (1) Data frame handling\nI0420 00:27:49.757754 1934 log.go:172] (0xc00029caa0) (1) Data frame sent\nI0420 00:27:49.757777 1934 log.go:172] (0xc0008e80b0) (0xc00029caa0) Stream removed, broadcasting: 1\nI0420 00:27:49.757813 1934 log.go:172] (0xc0008e80b0) Go away received\nI0420 00:27:49.758238 1934 log.go:172] (0xc0008e80b0) (0xc00029caa0) Stream removed, broadcasting: 1\nI0420 00:27:49.758259 1934 log.go:172] (0xc0008e80b0) (0xc000a5c000) Stream removed, broadcasting: 3\nI0420 00:27:49.758271 1934 log.go:172] (0xc0008e80b0) (0xc000ae6000) Stream removed, broadcasting: 5\n" Apr 20 00:27:49.763: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 20 00:27:49.763: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 20 00:28:09.784: INFO: Waiting for StatefulSet statefulset-8149/ss2 to complete update Apr 20 00:28:09.784: INFO: Waiting for Pod statefulset-8149/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 20 00:28:19.793: INFO: Deleting all statefulset in ns statefulset-8149 Apr 20 00:28:19.796: INFO: Scaling statefulset ss2 to 0 Apr 20 00:28:49.813: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 00:28:49.815: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:28:49.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8149" for this suite. • [SLOW TEST:152.306 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":156,"skipped":2915,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:28:49.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches 
completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:29:05.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6716" for this suite. • [SLOW TEST:16.128 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":157,"skipped":2918,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:29:05.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 20 00:29:06.062: INFO: Waiting up to 5m0s for pod "client-containers-b2683124-e0ba-4ca7-a466-344420a6936b" in 
namespace "containers-847" to be "Succeeded or Failed" Apr 20 00:29:06.081: INFO: Pod "client-containers-b2683124-e0ba-4ca7-a466-344420a6936b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.759075ms Apr 20 00:29:08.091: INFO: Pod "client-containers-b2683124-e0ba-4ca7-a466-344420a6936b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028884578s Apr 20 00:29:10.095: INFO: Pod "client-containers-b2683124-e0ba-4ca7-a466-344420a6936b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033056575s STEP: Saw pod success Apr 20 00:29:10.095: INFO: Pod "client-containers-b2683124-e0ba-4ca7-a466-344420a6936b" satisfied condition "Succeeded or Failed" Apr 20 00:29:10.098: INFO: Trying to get logs from node latest-worker2 pod client-containers-b2683124-e0ba-4ca7-a466-344420a6936b container test-container: STEP: delete the pod Apr 20 00:29:10.139: INFO: Waiting for pod client-containers-b2683124-e0ba-4ca7-a466-344420a6936b to disappear Apr 20 00:29:10.141: INFO: Pod client-containers-b2683124-e0ba-4ca7-a466-344420a6936b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:29:10.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-847" for this suite. 
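The Docker Containers case above verifies that a pod spec can override both the image's default entrypoint and its default arguments. A minimal sketch of a pod exercising the same behavior (the name and image here are illustrative, not taken from the log; the actual manifest the e2e test builds is not shown in this output):

```yaml
# Hypothetical pod illustrating command/args override; not the exact
# manifest created by the conformance test.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # `command` replaces the image's ENTRYPOINT; `args` replaces its CMD.
    # Setting both overrides everything the image would run by default.
    command: ["/bin/echo"]
    args: ["overridden", "arguments"]
```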
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2958,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:29:10.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:29:10.264: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f99990b6-e105-4bf4-a4d9-2c62d96862b6", Controller:(*bool)(0xc005590bea), BlockOwnerDeletion:(*bool)(0xc005590beb)}} Apr 20 00:29:10.362: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4939ac97-62a8-4325-8381-71bdc84b7242", Controller:(*bool)(0xc005519872), BlockOwnerDeletion:(*bool)(0xc005519873)}} Apr 20 00:29:10.366: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b0d7ff46-3028-41ad-aca6-275cc377827c", Controller:(*bool)(0xc0055b886a), BlockOwnerDeletion:(*bool)(0xc0055b886b)}} [AfterEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:29:15.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7346" for this suite. • [SLOW TEST:5.274 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":159,"skipped":3000,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:29:15.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-afb9db7b-13d7-41e7-80a7-79172b84373a STEP: Creating a pod to test consume configMaps Apr 20 00:29:15.537: INFO: Waiting up to 5m0s for pod "pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0" 
in namespace "configmap-5544" to be "Succeeded or Failed" Apr 20 00:29:15.541: INFO: Pod "pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.79344ms Apr 20 00:29:17.571: INFO: Pod "pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034319414s Apr 20 00:29:19.575: INFO: Pod "pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038460345s STEP: Saw pod success Apr 20 00:29:19.576: INFO: Pod "pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0" satisfied condition "Succeeded or Failed" Apr 20 00:29:19.579: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0 container configmap-volume-test: STEP: delete the pod Apr 20 00:29:19.625: INFO: Waiting for pod pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0 to disappear Apr 20 00:29:19.631: INFO: Pod pod-configmaps-4838f892-7628-4dc8-b0f8-8443c11355e0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:29:19.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5544" for this suite. 
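The ConfigMap case above checks that files projected from a ConfigMap volume carry the permission bits requested via `defaultMode`. A sketch of the shape of such a pod, assuming illustrative names (the log does not include the test's actual manifest):

```yaml
# Illustrative only: a ConfigMap consumed as a volume with defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["ls", "-l", "/etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
      # defaultMode sets the permission bits (here 0400, read-only for
      # the owner) on every key projected into the volume.
      defaultMode: 0400
```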
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":3018,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:29:19.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-8rsl STEP: Creating a pod to test atomic-volume-subpath Apr 20 00:29:20.084: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8rsl" in namespace "subpath-9473" to be "Succeeded or Failed" Apr 20 00:29:20.127: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Pending", Reason="", readiness=false. Elapsed: 43.043751ms Apr 20 00:29:22.131: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047060566s Apr 20 00:29:24.135: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.05107241s Apr 20 00:29:26.139: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 6.055094517s Apr 20 00:29:28.142: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 8.05850037s Apr 20 00:29:30.146: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 10.062511023s Apr 20 00:29:32.150: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 12.066509344s Apr 20 00:29:34.154: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 14.070437416s Apr 20 00:29:36.158: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 16.074189055s Apr 20 00:29:38.161: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 18.077397322s Apr 20 00:29:40.165: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 20.080907898s Apr 20 00:29:42.168: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Running", Reason="", readiness=true. Elapsed: 22.083924591s Apr 20 00:29:44.172: INFO: Pod "pod-subpath-test-downwardapi-8rsl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.087870557s STEP: Saw pod success Apr 20 00:29:44.172: INFO: Pod "pod-subpath-test-downwardapi-8rsl" satisfied condition "Succeeded or Failed" Apr 20 00:29:44.175: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-8rsl container test-container-subpath-downwardapi-8rsl: STEP: delete the pod Apr 20 00:29:44.231: INFO: Waiting for pod pod-subpath-test-downwardapi-8rsl to disappear Apr 20 00:29:44.236: INFO: Pod pod-subpath-test-downwardapi-8rsl no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-8rsl Apr 20 00:29:44.236: INFO: Deleting pod "pod-subpath-test-downwardapi-8rsl" in namespace "subpath-9473" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:29:44.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9473" for this suite. • [SLOW TEST:24.608 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":161,"skipped":3030,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:29:44.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 20 00:29:44.281: INFO: >>> kubeConfig: /root/.kube/config Apr 20 00:29:46.187: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:29:56.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9856" for this suite. 
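The CustomResourcePublishOpenAPI case above registers two CRDs in different API groups and asserts that both show up in the aggregated OpenAPI document. One of the pair might look like the following sketch (group and kind names are hypothetical; the test generates its own):

```yaml
# One of two hypothetical CRDs in distinct groups; the e2e test verifies
# both are published in the apiserver's OpenAPI documentation.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.group-a.example.com
spec:
  group: group-a.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    # apiextensions.k8s.io/v1 requires a structural schema per version;
    # this minimal schema is what gets published into OpenAPI.
    schema:
      openAPIV3Schema:
        type: object
```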
• [SLOW TEST:12.856 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":162,"skipped":3044,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:29:57.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 20 00:29:57.190: INFO: Waiting up to 5m0s for pod "pod-86cd4a54-5bcd-475f-ada3-b706a3af9826" in namespace "emptydir-1159" to be "Succeeded or Failed" Apr 20 00:29:57.236: INFO: Pod "pod-86cd4a54-5bcd-475f-ada3-b706a3af9826": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.30885ms Apr 20 00:29:59.240: INFO: Pod "pod-86cd4a54-5bcd-475f-ada3-b706a3af9826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049564323s Apr 20 00:30:01.243: INFO: Pod "pod-86cd4a54-5bcd-475f-ada3-b706a3af9826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053156972s STEP: Saw pod success Apr 20 00:30:01.244: INFO: Pod "pod-86cd4a54-5bcd-475f-ada3-b706a3af9826" satisfied condition "Succeeded or Failed" Apr 20 00:30:01.246: INFO: Trying to get logs from node latest-worker pod pod-86cd4a54-5bcd-475f-ada3-b706a3af9826 container test-container: STEP: delete the pod Apr 20 00:30:01.261: INFO: Waiting for pod pod-86cd4a54-5bcd-475f-ada3-b706a3af9826 to disappear Apr 20 00:30:01.264: INFO: Pod pod-86cd4a54-5bcd-475f-ada3-b706a3af9826 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:30:01.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1159" for this suite. 
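The EmptyDir case above checks the mount's permission mode when the volume uses the default medium, i.e. node-local disk rather than tmpfs. Omitting `medium` (or leaving `emptyDir: {}`) is what "default medium" means, as opposed to `medium: Memory`. A sketch with illustrative names:

```yaml
# Illustrative emptyDir on the default (node disk) medium; not the
# exact pod the conformance test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Print the mount's mode bits, which is what the test inspects.
    command: ["ls", "-ld", "/test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```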
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":3057,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:30:01.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-b4n25 in namespace proxy-5922 I0420 00:30:01.379068 8 runners.go:190] Created replication controller with name: proxy-service-b4n25, namespace: proxy-5922, replica count: 1 I0420 00:30:02.429774 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 00:30:03.430028 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 00:30:04.430285 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 00:30:05.430509 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:06.430708 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:07.430909 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:08.431145 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:09.431364 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:10.431570 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:11.431778 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:12.432006 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 00:30:13.432268 8 runners.go:190] proxy-service-b4n25 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 20 00:30:13.436: INFO: setup took 12.109705113s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 20 00:30:13.447: INFO: (0) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 11.399858ms) Apr 20 00:30:13.448: INFO: (0) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 11.837623ms) Apr 20 00:30:13.448: INFO: (0) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 
12.051484ms) Apr 20 00:30:13.448: INFO: (0) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 12.267274ms) Apr 20 00:30:13.451: INFO: (0) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 14.71924ms) Apr 20 00:30:13.451: INFO: (0) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 14.820639ms) Apr 20 00:30:13.451: INFO: (0) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 14.74575ms) Apr 20 00:30:13.451: INFO: (0) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 14.847344ms) Apr 20 00:30:13.451: INFO: (0) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... (200; 15.201661ms) Apr 20 00:30:13.451: INFO: (0) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... (200; 15.251662ms) Apr 20 00:30:13.452: INFO: (0) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 15.497452ms) Apr 20 00:30:13.455: INFO: (0) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 18.921203ms) Apr 20 00:30:13.455: INFO: (0) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 18.753292ms) Apr 20 00:30:13.457: INFO: (0) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 10.295311ms) Apr 20 00:30:13.468: INFO: (1) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 10.511624ms) Apr 20 00:30:13.468: INFO: (1) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 10.459129ms) Apr 20 00:30:13.469: INFO: (1) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 10.974901ms) Apr 20 00:30:13.469: INFO: (1) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 11.002055ms) Apr 20 00:30:13.469: INFO: (1) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 11.125205ms) Apr 20 00:30:13.469: INFO: (1) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 11.023291ms) Apr 20 00:30:13.470: INFO: (1) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 12.224607ms) Apr 20 00:30:13.470: INFO: (1) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 12.26664ms) Apr 20 00:30:13.470: INFO: (1) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 12.27133ms) Apr 20 00:30:13.470: INFO: (1) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 12.439024ms) Apr 20 00:30:13.470: INFO: (1) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 12.400887ms) Apr 20 00:30:13.471: INFO: (1) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test<... (200; 4.265025ms) Apr 20 00:30:13.475: INFO: (2) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... 
(200; 4.293055ms) Apr 20 00:30:13.475: INFO: (2) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.309915ms) Apr 20 00:30:13.475: INFO: (2) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.323368ms) Apr 20 00:30:13.475: INFO: (2) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test (200; 5.06525ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 5.159245ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 5.056795ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 5.402157ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 5.474337ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 5.438856ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 5.526585ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 5.553918ms) Apr 20 00:30:13.476: INFO: (2) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 5.711512ms) Apr 20 00:30:13.478: INFO: (3) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 2.184668ms) Apr 20 00:30:13.480: INFO: (3) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 4.686407ms) Apr 20 00:30:13.481: INFO: (3) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.663098ms) Apr 20 00:30:13.481: INFO: (3) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 4.702649ms) Apr 20 00:30:13.481: INFO: (3) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.675578ms) Apr 20 00:30:13.482: INFO: (3) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 5.394732ms) Apr 20 00:30:13.482: INFO: (3) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 5.93233ms) Apr 20 00:30:13.482: INFO: (3) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 6.015846ms) Apr 20 00:30:13.482: INFO: (3) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 5.988774ms) Apr 20 00:30:13.482: INFO: (3) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 6.055422ms) Apr 20 00:30:13.482: INFO: (3) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 6.106959ms) Apr 20 00:30:13.486: INFO: (4) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... 
(200; 3.622901ms) Apr 20 00:30:13.486: INFO: (4) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 3.657227ms) Apr 20 00:30:13.486: INFO: (4) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 3.692154ms) Apr 20 00:30:13.486: INFO: (4) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 3.654748ms) Apr 20 00:30:13.486: INFO: (4) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 3.80568ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 4.114182ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 4.078001ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.193154ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 4.364038ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 4.412459ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 4.430445ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 4.462044ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... (200; 4.642859ms) Apr 20 00:30:13.487: INFO: (4) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test (200; 4.04739ms) Apr 20 00:30:13.492: INFO: (5) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... 
(200; 4.096924ms) Apr 20 00:30:13.492: INFO: (5) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.079483ms) Apr 20 00:30:13.492: INFO: (5) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 4.06346ms) Apr 20 00:30:13.492: INFO: (5) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... (200; 4.058397ms) Apr 20 00:30:13.492: INFO: (5) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 4.100524ms) Apr 20 00:30:13.492: INFO: (5) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 3.28601ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.491517ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 4.867791ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 5.064119ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 5.187207ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 5.24046ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 5.270885ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 5.265286ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 5.313264ms) Apr 20 00:30:13.497: INFO: (6) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 5.323116ms) Apr 20 00:30:13.498: INFO: (6) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 5.481783ms) Apr 20 00:30:13.498: INFO: (6) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 5.842293ms) Apr 20 00:30:13.498: INFO: (6) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 3.344824ms) Apr 20 00:30:13.502: INFO: (7) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 3.354764ms) Apr 20 00:30:13.502: INFO: (7) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... (200; 3.437809ms) Apr 20 00:30:13.502: INFO: (7) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 3.514394ms) Apr 20 00:30:13.502: INFO: (7) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 3.451736ms) Apr 20 00:30:13.502: INFO: (7) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 2.980497ms) Apr 20 00:30:13.506: INFO: (8) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 3.104883ms) Apr 20 00:30:13.507: INFO: (8) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 3.885536ms) Apr 20 00:30:13.507: INFO: (8) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 3.977396ms) Apr 20 00:30:13.507: INFO: (8) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test<... 
(200; 4.043642ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 4.335166ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 4.280183ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 4.368531ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.464278ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 4.450299ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 4.59753ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 4.597169ms) Apr 20 00:30:13.508: INFO: (8) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.704935ms) Apr 20 00:30:13.510: INFO: (9) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... 
(200; 2.085637ms) Apr 20 00:30:13.512: INFO: (9) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 3.865399ms) Apr 20 00:30:13.512: INFO: (9) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 4.155357ms) Apr 20 00:30:13.512: INFO: (9) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 4.351282ms) Apr 20 00:30:13.512: INFO: (9) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 4.336287ms) Apr 20 00:30:13.512: INFO: (9) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 4.369704ms) Apr 20 00:30:13.512: INFO: (9) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test<... (200; 4.890936ms) Apr 20 00:30:13.513: INFO: (9) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 4.999092ms) Apr 20 00:30:13.513: INFO: (9) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 5.09261ms) Apr 20 00:30:13.517: INFO: (10) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... (200; 4.305368ms) Apr 20 00:30:13.517: INFO: (10) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.311547ms) Apr 20 00:30:13.517: INFO: (10) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test (200; 4.356955ms) Apr 20 00:30:13.518: INFO: (10) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 4.333568ms) Apr 20 00:30:13.518: INFO: (10) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.385954ms) Apr 20 00:30:13.518: INFO: (10) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.465608ms) Apr 20 00:30:13.518: INFO: (10) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 4.524674ms) Apr 20 00:30:13.518: INFO: (10) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 4.584444ms) Apr 20 00:30:13.519: INFO: (10) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 5.331647ms) Apr 20 00:30:13.519: INFO: (10) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 5.40293ms) Apr 20 00:30:13.519: INFO: (10) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 5.458011ms) Apr 20 00:30:13.519: INFO: (10) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 5.484616ms) Apr 20 00:30:13.519: INFO: (10) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 5.479318ms) Apr 20 00:30:13.519: INFO: (10) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 5.551413ms) Apr 20 00:30:13.521: INFO: (11) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 2.644144ms) Apr 20 00:30:13.521: INFO: (11) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... (200; 2.71044ms) Apr 20 00:30:13.521: INFO: (11) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 2.699235ms) Apr 20 00:30:13.523: INFO: (11) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 3.751445ms) Apr 20 00:30:13.523: INFO: (11) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test<... 
(200; 3.969832ms) Apr 20 00:30:13.523: INFO: (11) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 3.987546ms) Apr 20 00:30:13.524: INFO: (11) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 5.249833ms) Apr 20 00:30:13.524: INFO: (11) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 5.33154ms) Apr 20 00:30:13.524: INFO: (11) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 5.2627ms) Apr 20 00:30:13.524: INFO: (11) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 5.329647ms) Apr 20 00:30:13.524: INFO: (11) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 5.261352ms) Apr 20 00:30:13.524: INFO: (11) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 5.362283ms) Apr 20 00:30:13.527: INFO: (12) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 3.196326ms) Apr 20 00:30:13.528: INFO: (12) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 3.852351ms) Apr 20 00:30:13.529: INFO: (12) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 4.325447ms) Apr 20 00:30:13.529: INFO: (12) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.371732ms) Apr 20 00:30:13.529: INFO: (12) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 4.561196ms) Apr 20 00:30:13.529: INFO: (12) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 4.595399ms) Apr 20 00:30:13.529: INFO: (12) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 4.873283ms) Apr 20 00:30:13.529: INFO: (12) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.823792ms) Apr 20 00:30:13.529: INFO: (12) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 5.113071ms) Apr 20 00:30:13.530: INFO: (12) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 5.33816ms) Apr 20 00:30:13.530: INFO: (12) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 5.385711ms) Apr 20 00:30:13.530: INFO: (12) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 5.420861ms) Apr 20 00:30:13.530: INFO: (12) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 5.358894ms) Apr 20 00:30:13.531: INFO: (12) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 6.304287ms) Apr 20 00:30:13.536: INFO: (13) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... (200; 5.375353ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 8.933589ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 8.892042ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 8.876875ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 8.909694ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 8.955936ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 9.037698ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 9.060039ms) Apr 20 00:30:13.540: INFO: (13) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 4.0284ms) Apr 20 00:30:13.546: INFO: (14) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.278411ms) Apr 20 00:30:13.546: INFO: (14) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.27236ms) Apr 20 00:30:13.546: INFO: (14) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test<... 
(200; 4.409867ms) Apr 20 00:30:13.546: INFO: (14) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.458674ms) Apr 20 00:30:13.546: INFO: (14) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 4.602444ms) Apr 20 00:30:13.546: INFO: (14) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 4.607866ms) Apr 20 00:30:13.547: INFO: (14) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 4.882902ms) Apr 20 00:30:13.547: INFO: (14) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 4.983508ms) Apr 20 00:30:13.547: INFO: (14) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 4.980799ms) Apr 20 00:30:13.547: INFO: (14) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 5.477386ms) Apr 20 00:30:13.547: INFO: (14) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 5.446079ms) Apr 20 00:30:13.547: INFO: (14) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 5.534176ms) Apr 20 00:30:13.547: INFO: (14) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 5.704408ms) Apr 20 00:30:13.551: INFO: (15) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... 
(200; 3.779079ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 4.273067ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 4.843532ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 4.784123ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.79512ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: test (200; 4.841238ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 4.934252ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... (200; 4.906636ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 4.918525ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 4.900386ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 4.912822ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 4.930294ms) Apr 20 00:30:13.552: INFO: (15) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.934522ms) Apr 20 00:30:13.556: INFO: (16) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 3.34061ms) Apr 20 00:30:13.556: INFO: (16) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 4.052709ms) Apr 20 00:30:13.556: INFO: (16) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 3.708084ms) Apr 20 00:30:13.557: INFO: (16) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... (200; 4.112595ms) Apr 20 00:30:13.557: INFO: (16) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 4.335205ms) Apr 20 00:30:13.557: INFO: (16) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 4.325941ms) Apr 20 00:30:13.557: INFO: (16) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 4.410693ms) Apr 20 00:30:13.557: INFO: (16) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 4.453278ms) Apr 20 00:30:13.557: INFO: (16) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 4.543408ms) Apr 20 00:30:13.558: INFO: (16) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 4.778857ms) Apr 20 00:30:13.558: INFO: (16) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 5.042821ms) Apr 20 00:30:13.558: INFO: (16) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 5.146977ms) Apr 20 00:30:13.558: INFO: (16) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.855724ms) Apr 20 00:30:13.558: INFO: (16) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 4.854399ms) Apr 20 00:30:13.558: INFO: (16) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 4.943483ms) Apr 20 00:30:13.558: INFO: (16) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: 
test (200; 3.871513ms) Apr 20 00:30:13.562: INFO: (17) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 3.924605ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 5.623923ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 5.696128ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 5.677158ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname1/proxy/: foo (200; 5.667733ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname2/proxy/: tls qux (200; 5.694834ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname1/proxy/: foo (200; 5.781623ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/services/proxy-service-b4n25:portname2/proxy/: bar (200; 5.785727ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/services/http:proxy-service-b4n25:portname2/proxy/: bar (200; 5.822705ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... (200; 5.795437ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 5.957798ms) Apr 20 00:30:13.564: INFO: (17) /api/v1/namespaces/proxy-5922/services/https:proxy-service-b4n25:tlsportname1/proxy/: tls baz (200; 6.25505ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 2.938317ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 2.962262ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 2.856995ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 3.091335ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 3.04191ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:1080/proxy/: ... (200; 3.066306ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 3.06161ms) Apr 20 00:30:13.568: INFO: (18) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: ... (200; 2.225841ms) Apr 20 00:30:13.572: INFO: (19) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 2.604446ms) Apr 20 00:30:13.572: INFO: (19) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:1080/proxy/: test<... 
(200; 2.911831ms) Apr 20 00:30:13.573: INFO: (19) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:460/proxy/: tls baz (200; 3.010055ms) Apr 20 00:30:13.573: INFO: (19) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9/proxy/: test (200; 2.993542ms) Apr 20 00:30:13.573: INFO: (19) /api/v1/namespaces/proxy-5922/pods/proxy-service-b4n25-5cdd9:162/proxy/: bar (200; 3.076861ms) Apr 20 00:30:13.573: INFO: (19) /api/v1/namespaces/proxy-5922/pods/http:proxy-service-b4n25-5cdd9:160/proxy/: foo (200; 3.039548ms) Apr 20 00:30:13.573: INFO: (19) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:462/proxy/: tls qux (200; 3.037098ms) Apr 20 00:30:13.573: INFO: (19) /api/v1/namespaces/proxy-5922/pods/https:proxy-service-b4n25-5cdd9:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-6b63ca69-d545-4422-9b3b-ed2836df6d60 STEP: Creating a pod to test consume configMaps Apr 20 00:30:23.054: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0" in namespace "configmap-598" to be "Succeeded or Failed" Apr 20 00:30:23.057: INFO: Pod "pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.337343ms Apr 20 00:30:25.061: INFO: Pod "pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006851388s Apr 20 00:30:27.064: INFO: Pod "pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01003107s STEP: Saw pod success Apr 20 00:30:27.064: INFO: Pod "pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0" satisfied condition "Succeeded or Failed" Apr 20 00:30:27.066: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0 container configmap-volume-test: STEP: delete the pod Apr 20 00:30:27.095: INFO: Waiting for pod pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0 to disappear Apr 20 00:30:27.105: INFO: Pod pod-configmaps-ed48f26a-57e9-4aea-b36f-bc3a34f761f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:30:27.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-598" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":3071,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:30:27.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 20 00:30:27.208: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 20 00:30:38.657: INFO: >>> kubeConfig: /root/.kube/config Apr 20 00:30:40.564: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:30:52.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9100" for this suite. • [SLOW TEST:24.963 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":166,"skipped":3088,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:30:52.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 20 00:30:56.324: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:30:56.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1305" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":3091,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:30:56.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-181 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 20 00:30:56.472: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 20 00:30:56.531: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:30:58.535: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:31:00.535: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:31:02.535: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:31:04.536: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 
00:31:06.535: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:31:08.536: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:31:10.536: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 20 00:31:10.542: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 20 00:31:14.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.128:8080/dial?request=hostname&protocol=udp&host=10.244.2.222&port=8081&tries=1'] Namespace:pod-network-test-181 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:31:14.564: INFO: >>> kubeConfig: /root/.kube/config I0420 00:31:14.596457 8 log.go:172] (0xc0013702c0) (0xc0022040a0) Create stream I0420 00:31:14.596479 8 log.go:172] (0xc0013702c0) (0xc0022040a0) Stream added, broadcasting: 1 I0420 00:31:14.598860 8 log.go:172] (0xc0013702c0) Reply frame received for 1 I0420 00:31:14.598919 8 log.go:172] (0xc0013702c0) (0xc0002cbcc0) Create stream I0420 00:31:14.598938 8 log.go:172] (0xc0013702c0) (0xc0002cbcc0) Stream added, broadcasting: 3 I0420 00:31:14.599820 8 log.go:172] (0xc0013702c0) Reply frame received for 3 I0420 00:31:14.599873 8 log.go:172] (0xc0013702c0) (0xc002204140) Create stream I0420 00:31:14.599891 8 log.go:172] (0xc0013702c0) (0xc002204140) Stream added, broadcasting: 5 I0420 00:31:14.600766 8 log.go:172] (0xc0013702c0) Reply frame received for 5 I0420 00:31:14.708997 8 log.go:172] (0xc0013702c0) Data frame received for 3 I0420 00:31:14.709043 8 log.go:172] (0xc0002cbcc0) (3) Data frame handling I0420 00:31:14.709075 8 log.go:172] (0xc0002cbcc0) (3) Data frame sent I0420 00:31:14.709672 8 log.go:172] (0xc0013702c0) Data frame received for 5 I0420 00:31:14.709699 8 log.go:172] (0xc002204140) (5) Data frame handling I0420 00:31:14.709814 8 log.go:172] (0xc0013702c0) Data frame received for 3 I0420 00:31:14.709832 8 log.go:172] 
(0xc0002cbcc0) (3) Data frame handling I0420 00:31:14.711378 8 log.go:172] (0xc0013702c0) Data frame received for 1 I0420 00:31:14.711462 8 log.go:172] (0xc0022040a0) (1) Data frame handling I0420 00:31:14.711506 8 log.go:172] (0xc0022040a0) (1) Data frame sent I0420 00:31:14.711550 8 log.go:172] (0xc0013702c0) (0xc0022040a0) Stream removed, broadcasting: 1 I0420 00:31:14.711582 8 log.go:172] (0xc0013702c0) Go away received I0420 00:31:14.711687 8 log.go:172] (0xc0013702c0) (0xc0022040a0) Stream removed, broadcasting: 1 I0420 00:31:14.711711 8 log.go:172] (0xc0013702c0) (0xc0002cbcc0) Stream removed, broadcasting: 3 I0420 00:31:14.711723 8 log.go:172] (0xc0013702c0) (0xc002204140) Stream removed, broadcasting: 5 Apr 20 00:31:14.711: INFO: Waiting for responses: map[] Apr 20 00:31:14.715: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.128:8080/dial?request=hostname&protocol=udp&host=10.244.1.127&port=8081&tries=1'] Namespace:pod-network-test-181 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:31:14.715: INFO: >>> kubeConfig: /root/.kube/config I0420 00:31:14.752019 8 log.go:172] (0xc000c16420) (0xc001f8e5a0) Create stream I0420 00:31:14.752058 8 log.go:172] (0xc000c16420) (0xc001f8e5a0) Stream added, broadcasting: 1 I0420 00:31:14.754043 8 log.go:172] (0xc000c16420) Reply frame received for 1 I0420 00:31:14.754093 8 log.go:172] (0xc000c16420) (0xc0022041e0) Create stream I0420 00:31:14.754108 8 log.go:172] (0xc000c16420) (0xc0022041e0) Stream added, broadcasting: 3 I0420 00:31:14.755150 8 log.go:172] (0xc000c16420) Reply frame received for 3 I0420 00:31:14.755197 8 log.go:172] (0xc000c16420) (0xc00055f5e0) Create stream I0420 00:31:14.755214 8 log.go:172] (0xc000c16420) (0xc00055f5e0) Stream added, broadcasting: 5 I0420 00:31:14.756210 8 log.go:172] (0xc000c16420) Reply frame received for 5 I0420 00:31:14.882020 8 log.go:172] (0xc000c16420) Data frame 
received for 5 I0420 00:31:14.882044 8 log.go:172] (0xc00055f5e0) (5) Data frame handling I0420 00:31:14.882074 8 log.go:172] (0xc000c16420) Data frame received for 3 I0420 00:31:14.882090 8 log.go:172] (0xc0022041e0) (3) Data frame handling I0420 00:31:14.882100 8 log.go:172] (0xc0022041e0) (3) Data frame sent I0420 00:31:14.882131 8 log.go:172] (0xc000c16420) Data frame received for 3 I0420 00:31:14.882149 8 log.go:172] (0xc0022041e0) (3) Data frame handling I0420 00:31:14.883356 8 log.go:172] (0xc000c16420) Data frame received for 1 I0420 00:31:14.883372 8 log.go:172] (0xc001f8e5a0) (1) Data frame handling I0420 00:31:14.883381 8 log.go:172] (0xc001f8e5a0) (1) Data frame sent I0420 00:31:14.883390 8 log.go:172] (0xc000c16420) (0xc001f8e5a0) Stream removed, broadcasting: 1 I0420 00:31:14.883435 8 log.go:172] (0xc000c16420) Go away received I0420 00:31:14.883505 8 log.go:172] (0xc000c16420) (0xc001f8e5a0) Stream removed, broadcasting: 1 I0420 00:31:14.883522 8 log.go:172] (0xc000c16420) (0xc0022041e0) Stream removed, broadcasting: 3 I0420 00:31:14.883529 8 log.go:172] (0xc000c16420) (0xc00055f5e0) Stream removed, broadcasting: 5 Apr 20 00:31:14.883: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:31:14.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-181" for this suite. 
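The connectivity check above is driven by curling the test pod's `/dial` endpoint, which asks the webserver container to dial each netserver pod over UDP and report which hostname answered. A minimal sketch of that request, in Python — the URL pieces are taken from the log, while the `{"responses": [...]}` reply shape is an assumption about the dial endpoint's JSON output:

```python
from urllib.parse import urlencode
import json

def build_dial_url(proxy_ip, proxy_port, target_ip, target_port,
                   protocol="udp", tries=1):
    """Build the /dial URL the test curls from inside test-container-pod.

    The query asks the webserver to dial the target endpoint `tries` times
    and report the hostname(s) that answered.
    """
    query = urlencode({
        "request": "hostname",
        "protocol": protocol,
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:{proxy_port}/dial?{query}"

def hostnames_from_reply(body):
    """Parse the dial reply; the test waits until every expected pod hostname
    has shown up in `responses` (assumed reply shape)."""
    return set(json.loads(body).get("responses", []))

# Endpoint addresses as they appear in the log above.
url = build_dial_url("10.244.1.128", 8080, "10.244.2.222", 8081)
```

`Waiting for responses: map[]` in the log means the set of still-missing hostnames is empty, i.e. every dial target answered.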
• [SLOW TEST:18.512 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":3096,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:31:14.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 20 00:31:15.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 20 00:31:16.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config
--namespace=crd-publish-openapi-9888 create -f -' Apr 20 00:31:20.013: INFO: stderr: "" Apr 20 00:31:20.013: INFO: stdout: "e2e-test-crd-publish-openapi-9994-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 20 00:31:20.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9888 delete e2e-test-crd-publish-openapi-9994-crds test-cr' Apr 20 00:31:20.478: INFO: stderr: "" Apr 20 00:31:20.478: INFO: stdout: "e2e-test-crd-publish-openapi-9994-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 20 00:31:20.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9888 apply -f -' Apr 20 00:31:21.274: INFO: stderr: "" Apr 20 00:31:21.274: INFO: stdout: "e2e-test-crd-publish-openapi-9994-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 20 00:31:21.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9888 delete e2e-test-crd-publish-openapi-9994-crds test-cr' Apr 20 00:31:21.503: INFO: stderr: "" Apr 20 00:31:21.503: INFO: stdout: "e2e-test-crd-publish-openapi-9994-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 20 00:31:21.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9994-crds' Apr 20 00:31:22.035: INFO: stderr: "" Apr 20 00:31:22.035: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9994-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 
00:31:25.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9888" for this suite.
• [SLOW TEST:10.196 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":169,"skipped":3119,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:31:25.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
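The CRD spec that just passed shows that when a CRD publishes no validation schema, `kubectl create`/`apply` accept a CR carrying arbitrary unknown properties. A toy model of that client-side behavior, assuming nothing about kubectl's real OpenAPI-client internals — the field names are illustrative:

```python
def client_side_validate(obj, schema):
    """Toy model of kubectl-style client-side validation: unknown top-level
    fields are only rejected when the published schema actually declares
    properties. With an empty schema (the CRD-without-validation-schema case),
    anything goes. Illustration only, not kubectl's actual code path."""
    known = schema.get("properties")
    if not known:  # no validation schema published: accept any properties
        return []
    meta = ("apiVersion", "kind", "metadata")
    return [k for k in obj if k not in known and k not in meta]

# CR shaped like the test's, with a made-up unknown field.
cr = {"apiVersion": "crd-publish-openapi-test-empty.example.com/v1",
      "kind": "E2e-test-crd-publish-openapi-9994-crd",
      "metadata": {"name": "test-cr"},
      "someRandomField": 42}
```

With `schema = {}` the unknown field is tolerated, which is why the `create`/`apply` calls in the log succeed; a schema that declared `properties` would surface `someRandomField` as a violation.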
Apr 20 00:31:25.694: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 20 00:31:27.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939485, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939485, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939485, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939485, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:31:30.727: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:31:30.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:31:31.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5602" for this suite. 
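The conversion step in this spec ("Creating a v1 custom resource" / "v2 custom resource should be converted") is performed by the webhook pod deployed above: it receives a ConversionReview, rewrites each object's `apiVersion` to the desired version, and translates version-specific fields. A minimal sketch of such a handler — the `hostPort` → `host`/`port` split mirrors the sample converter used by this e2e test, but treat the field names and the `stable.example.com` group as illustrative:

```python
import copy

def convert_review(review):
    """Minimal CRD conversion-webhook sketch: for each object in the
    ConversionReview request, set apiVersion to desiredAPIVersion and,
    when converting to v2, split hostPort into host/port (assumed,
    illustrative field mapping)."""
    req = review["request"]
    converted = []
    for obj in req["objects"]:
        out = copy.deepcopy(obj)
        out["apiVersion"] = req["desiredAPIVersion"]
        if "hostPort" in out and out["apiVersion"].endswith("/v2"):
            host, _, port = out.pop("hostPort").rpartition(":")
            out["host"], out["port"] = host, port
        converted.append(out)
    return {"response": {"uid": req["uid"],
                         "convertedObjects": converted,
                         "result": {"status": "Success"}}}
```

The response must echo the request's `uid` and report a `Success` status, or the apiserver rejects the conversion.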
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:6.849 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":170,"skipped":3121,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:31:31.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace
statefulset-1100 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-1100 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1100 Apr 20 00:31:32.039: INFO: Found 0 stateful pods, waiting for 1 Apr 20 00:31:42.777: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 20 00:31:42.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 20 00:31:43.024: INFO: stderr: "I0420 00:31:42.906451 2069 log.go:172] (0xc00003ab00) (0xc00058c0a0) Create stream\nI0420 00:31:42.906523 2069 log.go:172] (0xc00003ab00) (0xc00058c0a0) Stream added, broadcasting: 1\nI0420 00:31:42.909253 2069 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0420 00:31:42.909292 2069 log.go:172] (0xc00003ab00) (0xc0009ec000) Create stream\nI0420 00:31:42.909304 2069 log.go:172] (0xc00003ab00) (0xc0009ec000) Stream added, broadcasting: 3\nI0420 00:31:42.910278 2069 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0420 00:31:42.910309 2069 log.go:172] (0xc00003ab00) (0xc00058c1e0) Create stream\nI0420 00:31:42.910324 2069 log.go:172] (0xc00003ab00) (0xc00058c1e0) Stream added, broadcasting: 5\nI0420 00:31:42.911235 2069 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0420 00:31:42.991318 2069 log.go:172] (0xc00003ab00) Data frame received for 5\nI0420 00:31:42.991344 2069 log.go:172] (0xc00058c1e0) (5) Data frame handling\nI0420 00:31:42.991363 2069 log.go:172] (0xc00058c1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0420 
00:31:43.015623 2069 log.go:172] (0xc00003ab00) Data frame received for 3\nI0420 00:31:43.015661 2069 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0420 00:31:43.015698 2069 log.go:172] (0xc0009ec000) (3) Data frame sent\nI0420 00:31:43.015731 2069 log.go:172] (0xc00003ab00) Data frame received for 3\nI0420 00:31:43.015749 2069 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0420 00:31:43.015879 2069 log.go:172] (0xc00003ab00) Data frame received for 5\nI0420 00:31:43.015914 2069 log.go:172] (0xc00058c1e0) (5) Data frame handling\nI0420 00:31:43.018124 2069 log.go:172] (0xc00003ab00) Data frame received for 1\nI0420 00:31:43.018149 2069 log.go:172] (0xc00058c0a0) (1) Data frame handling\nI0420 00:31:43.018173 2069 log.go:172] (0xc00058c0a0) (1) Data frame sent\nI0420 00:31:43.018187 2069 log.go:172] (0xc00003ab00) (0xc00058c0a0) Stream removed, broadcasting: 1\nI0420 00:31:43.018427 2069 log.go:172] (0xc00003ab00) Go away received\nI0420 00:31:43.018720 2069 log.go:172] (0xc00003ab00) (0xc00058c0a0) Stream removed, broadcasting: 1\nI0420 00:31:43.018759 2069 log.go:172] (0xc00003ab00) (0xc0009ec000) Stream removed, broadcasting: 3\nI0420 00:31:43.018779 2069 log.go:172] (0xc00003ab00) (0xc00058c1e0) Stream removed, broadcasting: 5\n" Apr 20 00:31:43.024: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 20 00:31:43.024: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 20 00:31:43.028: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 20 00:31:53.033: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 20 00:31:53.033: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 00:31:53.046: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:31:53.046: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:31:53.046: INFO: Apr 20 00:31:53.046: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 20 00:31:54.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996364576s Apr 20 00:31:55.056: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991126231s Apr 20 00:31:56.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98612845s Apr 20 00:31:57.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980859391s Apr 20 00:31:58.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97269677s Apr 20 00:31:59.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967778846s Apr 20 00:32:00.085: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961972721s Apr 20 00:32:01.089: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958146578s Apr 20 00:32:02.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.448439ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1100 Apr 20 00:32:03.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 20 00:32:03.328: INFO: stderr: "I0420 00:32:03.237232 2092 log.go:172] (0xc000778a50) (0xc000760140) Create stream\nI0420 00:32:03.237294 2092 log.go:172] (0xc000778a50) (0xc000760140) Stream added, broadcasting: 1\nI0420 00:32:03.239755 2092 log.go:172] 
(0xc000778a50) Reply frame received for 1\nI0420 00:32:03.239816 2092 log.go:172] (0xc000778a50) (0xc00092c000) Create stream\nI0420 00:32:03.239847 2092 log.go:172] (0xc000778a50) (0xc00092c000) Stream added, broadcasting: 3\nI0420 00:32:03.240715 2092 log.go:172] (0xc000778a50) Reply frame received for 3\nI0420 00:32:03.240758 2092 log.go:172] (0xc000778a50) (0xc0006bd220) Create stream\nI0420 00:32:03.240773 2092 log.go:172] (0xc000778a50) (0xc0006bd220) Stream added, broadcasting: 5\nI0420 00:32:03.241852 2092 log.go:172] (0xc000778a50) Reply frame received for 5\nI0420 00:32:03.322246 2092 log.go:172] (0xc000778a50) Data frame received for 3\nI0420 00:32:03.322303 2092 log.go:172] (0xc00092c000) (3) Data frame handling\nI0420 00:32:03.322317 2092 log.go:172] (0xc00092c000) (3) Data frame sent\nI0420 00:32:03.322326 2092 log.go:172] (0xc000778a50) Data frame received for 3\nI0420 00:32:03.322333 2092 log.go:172] (0xc00092c000) (3) Data frame handling\nI0420 00:32:03.322346 2092 log.go:172] (0xc000778a50) Data frame received for 5\nI0420 00:32:03.322353 2092 log.go:172] (0xc0006bd220) (5) Data frame handling\nI0420 00:32:03.322361 2092 log.go:172] (0xc0006bd220) (5) Data frame sent\nI0420 00:32:03.322370 2092 log.go:172] (0xc000778a50) Data frame received for 5\nI0420 00:32:03.322382 2092 log.go:172] (0xc0006bd220) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0420 00:32:03.323969 2092 log.go:172] (0xc000778a50) Data frame received for 1\nI0420 00:32:03.324001 2092 log.go:172] (0xc000760140) (1) Data frame handling\nI0420 00:32:03.324019 2092 log.go:172] (0xc000760140) (1) Data frame sent\nI0420 00:32:03.324034 2092 log.go:172] (0xc000778a50) (0xc000760140) Stream removed, broadcasting: 1\nI0420 00:32:03.324049 2092 log.go:172] (0xc000778a50) Go away received\nI0420 00:32:03.324518 2092 log.go:172] (0xc000778a50) (0xc000760140) Stream removed, broadcasting: 1\nI0420 00:32:03.324535 2092 log.go:172] (0xc000778a50) (0xc00092c000) 
Stream removed, broadcasting: 3\nI0420 00:32:03.324543 2092 log.go:172] (0xc000778a50) (0xc0006bd220) Stream removed, broadcasting: 5\n" Apr 20 00:32:03.328: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 20 00:32:03.328: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 20 00:32:03.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 20 00:32:03.550: INFO: stderr: "I0420 00:32:03.465645 2114 log.go:172] (0xc0005388f0) (0xc0006d72c0) Create stream\nI0420 00:32:03.465724 2114 log.go:172] (0xc0005388f0) (0xc0006d72c0) Stream added, broadcasting: 1\nI0420 00:32:03.471810 2114 log.go:172] (0xc0005388f0) Reply frame received for 1\nI0420 00:32:03.471846 2114 log.go:172] (0xc0005388f0) (0xc0008fe000) Create stream\nI0420 00:32:03.471861 2114 log.go:172] (0xc0005388f0) (0xc0008fe000) Stream added, broadcasting: 3\nI0420 00:32:03.472596 2114 log.go:172] (0xc0005388f0) Reply frame received for 3\nI0420 00:32:03.472624 2114 log.go:172] (0xc0005388f0) (0xc0006d74a0) Create stream\nI0420 00:32:03.472640 2114 log.go:172] (0xc0005388f0) (0xc0006d74a0) Stream added, broadcasting: 5\nI0420 00:32:03.473441 2114 log.go:172] (0xc0005388f0) Reply frame received for 5\nI0420 00:32:03.541899 2114 log.go:172] (0xc0005388f0) Data frame received for 5\nI0420 00:32:03.541947 2114 log.go:172] (0xc0006d74a0) (5) Data frame handling\nI0420 00:32:03.541968 2114 log.go:172] (0xc0006d74a0) (5) Data frame sent\nI0420 00:32:03.541984 2114 log.go:172] (0xc0005388f0) Data frame received for 3\nI0420 00:32:03.541997 2114 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0420 00:32:03.542014 2114 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0420 00:32:03.542029 2114 log.go:172] (0xc0005388f0) 
Data frame received for 3\nI0420 00:32:03.542043 2114 log.go:172] (0xc0008fe000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0420 00:32:03.542191 2114 log.go:172] (0xc0005388f0) Data frame received for 5\nI0420 00:32:03.542213 2114 log.go:172] (0xc0006d74a0) (5) Data frame handling\nI0420 00:32:03.544310 2114 log.go:172] (0xc0005388f0) Data frame received for 1\nI0420 00:32:03.544339 2114 log.go:172] (0xc0006d72c0) (1) Data frame handling\nI0420 00:32:03.544354 2114 log.go:172] (0xc0006d72c0) (1) Data frame sent\nI0420 00:32:03.544391 2114 log.go:172] (0xc0005388f0) (0xc0006d72c0) Stream removed, broadcasting: 1\nI0420 00:32:03.544412 2114 log.go:172] (0xc0005388f0) Go away received\nI0420 00:32:03.544868 2114 log.go:172] (0xc0005388f0) (0xc0006d72c0) Stream removed, broadcasting: 1\nI0420 00:32:03.544893 2114 log.go:172] (0xc0005388f0) (0xc0008fe000) Stream removed, broadcasting: 3\nI0420 00:32:03.544918 2114 log.go:172] (0xc0005388f0) (0xc0006d74a0) Stream removed, broadcasting: 5\n" Apr 20 00:32:03.550: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 20 00:32:03.550: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 20 00:32:03.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 20 00:32:03.753: INFO: stderr: "I0420 00:32:03.672621 2137 log.go:172] (0xc000a3c000) (0xc0005cf720) Create stream\nI0420 00:32:03.672685 2137 log.go:172] (0xc000a3c000) (0xc0005cf720) Stream added, broadcasting: 1\nI0420 00:32:03.675174 2137 log.go:172] (0xc000a3c000) Reply frame received for 1\nI0420 00:32:03.675227 2137 log.go:172] (0xc000a3c000) (0xc000438b40) Create 
stream\nI0420 00:32:03.675247 2137 log.go:172] (0xc000a3c000) (0xc000438b40) Stream added, broadcasting: 3\nI0420 00:32:03.676128 2137 log.go:172] (0xc000a3c000) Reply frame received for 3\nI0420 00:32:03.676158 2137 log.go:172] (0xc000a3c000) (0xc000a5e000) Create stream\nI0420 00:32:03.676168 2137 log.go:172] (0xc000a3c000) (0xc000a5e000) Stream added, broadcasting: 5\nI0420 00:32:03.676847 2137 log.go:172] (0xc000a3c000) Reply frame received for 5\nI0420 00:32:03.745716 2137 log.go:172] (0xc000a3c000) Data frame received for 3\nI0420 00:32:03.745750 2137 log.go:172] (0xc000438b40) (3) Data frame handling\nI0420 00:32:03.745774 2137 log.go:172] (0xc000438b40) (3) Data frame sent\nI0420 00:32:03.745795 2137 log.go:172] (0xc000a3c000) Data frame received for 3\nI0420 00:32:03.745808 2137 log.go:172] (0xc000438b40) (3) Data frame handling\nI0420 00:32:03.745852 2137 log.go:172] (0xc000a3c000) Data frame received for 5\nI0420 00:32:03.745897 2137 log.go:172] (0xc000a5e000) (5) Data frame handling\nI0420 00:32:03.745918 2137 log.go:172] (0xc000a5e000) (5) Data frame sent\nI0420 00:32:03.745941 2137 log.go:172] (0xc000a3c000) Data frame received for 5\nI0420 00:32:03.745961 2137 log.go:172] (0xc000a5e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0420 00:32:03.747467 2137 log.go:172] (0xc000a3c000) Data frame received for 1\nI0420 00:32:03.747489 2137 log.go:172] (0xc0005cf720) (1) Data frame handling\nI0420 00:32:03.747529 2137 log.go:172] (0xc0005cf720) (1) Data frame sent\nI0420 00:32:03.747573 2137 log.go:172] (0xc000a3c000) (0xc0005cf720) Stream removed, broadcasting: 1\nI0420 00:32:03.747662 2137 log.go:172] (0xc000a3c000) Go away received\nI0420 00:32:03.748042 2137 log.go:172] (0xc000a3c000) (0xc0005cf720) Stream removed, broadcasting: 1\nI0420 00:32:03.748101 2137 log.go:172] (0xc000a3c000) (0xc000438b40) Stream removed, broadcasting: 3\nI0420 
00:32:03.748145 2137 log.go:172] (0xc000a3c000) (0xc000a5e000) Stream removed, broadcasting: 5\n" Apr 20 00:32:03.754: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 20 00:32:03.754: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 20 00:32:03.758: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 20 00:32:13.765: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:32:13.765: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:32:13.765: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 20 00:32:13.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 20 00:32:13.993: INFO: stderr: "I0420 00:32:13.899413 2159 log.go:172] (0xc0009d14a0) (0xc0009ca820) Create stream\nI0420 00:32:13.899485 2159 log.go:172] (0xc0009d14a0) (0xc0009ca820) Stream added, broadcasting: 1\nI0420 00:32:13.905011 2159 log.go:172] (0xc0009d14a0) Reply frame received for 1\nI0420 00:32:13.905054 2159 log.go:172] (0xc0009d14a0) (0xc0005415e0) Create stream\nI0420 00:32:13.905066 2159 log.go:172] (0xc0009d14a0) (0xc0005415e0) Stream added, broadcasting: 3\nI0420 00:32:13.906354 2159 log.go:172] (0xc0009d14a0) Reply frame received for 3\nI0420 00:32:13.906405 2159 log.go:172] (0xc0009d14a0) (0xc00028aa00) Create stream\nI0420 00:32:13.906420 2159 log.go:172] (0xc0009d14a0) (0xc00028aa00) Stream added, broadcasting: 5\nI0420 00:32:13.907399 2159 log.go:172] (0xc0009d14a0) Reply frame received for 5\nI0420 00:32:13.985798 2159 log.go:172] (0xc0009d14a0) Data frame 
received for 5\nI0420 00:32:13.985841 2159 log.go:172] (0xc00028aa00) (5) Data frame handling\nI0420 00:32:13.985862 2159 log.go:172] (0xc00028aa00) (5) Data frame sent\nI0420 00:32:13.985876 2159 log.go:172] (0xc0009d14a0) Data frame received for 5\nI0420 00:32:13.985887 2159 log.go:172] (0xc00028aa00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0420 00:32:13.985918 2159 log.go:172] (0xc0009d14a0) Data frame received for 3\nI0420 00:32:13.985932 2159 log.go:172] (0xc0005415e0) (3) Data frame handling\nI0420 00:32:13.985949 2159 log.go:172] (0xc0005415e0) (3) Data frame sent\nI0420 00:32:13.985963 2159 log.go:172] (0xc0009d14a0) Data frame received for 3\nI0420 00:32:13.985975 2159 log.go:172] (0xc0005415e0) (3) Data frame handling\nI0420 00:32:13.987616 2159 log.go:172] (0xc0009d14a0) Data frame received for 1\nI0420 00:32:13.987646 2159 log.go:172] (0xc0009ca820) (1) Data frame handling\nI0420 00:32:13.987664 2159 log.go:172] (0xc0009ca820) (1) Data frame sent\nI0420 00:32:13.987679 2159 log.go:172] (0xc0009d14a0) (0xc0009ca820) Stream removed, broadcasting: 1\nI0420 00:32:13.987702 2159 log.go:172] (0xc0009d14a0) Go away received\nI0420 00:32:13.988004 2159 log.go:172] (0xc0009d14a0) (0xc0009ca820) Stream removed, broadcasting: 1\nI0420 00:32:13.988024 2159 log.go:172] (0xc0009d14a0) (0xc0005415e0) Stream removed, broadcasting: 3\nI0420 00:32:13.988034 2159 log.go:172] (0xc0009d14a0) (0xc00028aa00) Stream removed, broadcasting: 5\n" Apr 20 00:32:13.993: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 20 00:32:13.993: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 20 00:32:13.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' 
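The repeated `kubectl exec ... mv` invocations in this StatefulSet spec are how the test toggles pod health: moving `index.html` out of the Apache htdocs directory makes the HTTP readiness probe fail (Ready=false), and moving it back restores it; `|| true` keeps the exec exit code zero even if the file was already moved. A sketch that reconstructs the argv from the log — only the arguments actually shown above are used:

```python
def toggle_readiness_cmd(kubectl, server, kubeconfig, namespace, pod,
                         restore=False):
    """Rebuild the kubectl exec invocation the test runs against each ss-N
    pod. restore=False breaks readiness (htdocs -> /tmp); restore=True
    heals it (/tmp -> htdocs)."""
    src, dst = "/usr/local/apache2/htdocs/index.html", "/tmp/"
    if restore:
        src, dst = "/tmp/index.html", "/usr/local/apache2/htdocs/"
    return [kubectl, f"--server={server}", f"--kubeconfig={kubeconfig}",
            "exec", f"--namespace={namespace}", pod, "--",
            "/bin/sh", "-x", "-c", f"mv -v {src} {dst} || true"]

argv = toggle_readiness_cmd("/usr/local/bin/kubectl",
                            "https://172.30.12.66:32771",
                            "/root/.kube/config",
                            "statefulset-1100", "ss-0")
```

The `mv: can't rename '/tmp/index.html': No such file or directory` lines on ss-1/ss-2 earlier in the log are the expected case where a freshly created pod never had its file moved; `|| true` is what lets those execs still succeed.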
Apr 20 00:32:14.232: INFO: stderr: "I0420 00:32:14.124729 2180 log.go:172] (0xc00077ca50) (0xc0005e15e0) Create stream\nI0420 00:32:14.124798 2180 log.go:172] (0xc00077ca50) (0xc0005e15e0) Stream added, broadcasting: 1\nI0420 00:32:14.128026 2180 log.go:172] (0xc00077ca50) Reply frame received for 1\nI0420 00:32:14.128054 2180 log.go:172] (0xc00077ca50) (0xc0005c4000) Create stream\nI0420 00:32:14.128062 2180 log.go:172] (0xc00077ca50) (0xc0005c4000) Stream added, broadcasting: 3\nI0420 00:32:14.129026 2180 log.go:172] (0xc00077ca50) Reply frame received for 3\nI0420 00:32:14.129082 2180 log.go:172] (0xc00077ca50) (0xc0008cca00) Create stream\nI0420 00:32:14.129255 2180 log.go:172] (0xc00077ca50) (0xc0008cca00) Stream added, broadcasting: 5\nI0420 00:32:14.130382 2180 log.go:172] (0xc00077ca50) Reply frame received for 5\nI0420 00:32:14.197471 2180 log.go:172] (0xc00077ca50) Data frame received for 5\nI0420 00:32:14.197511 2180 log.go:172] (0xc0008cca00) (5) Data frame handling\nI0420 00:32:14.197529 2180 log.go:172] (0xc0008cca00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0420 00:32:14.224749 2180 log.go:172] (0xc00077ca50) Data frame received for 3\nI0420 00:32:14.224780 2180 log.go:172] (0xc0005c4000) (3) Data frame handling\nI0420 00:32:14.224820 2180 log.go:172] (0xc0005c4000) (3) Data frame sent\nI0420 00:32:14.225075 2180 log.go:172] (0xc00077ca50) Data frame received for 3\nI0420 00:32:14.225253 2180 log.go:172] (0xc0005c4000) (3) Data frame handling\nI0420 00:32:14.225294 2180 log.go:172] (0xc00077ca50) Data frame received for 5\nI0420 00:32:14.225308 2180 log.go:172] (0xc0008cca00) (5) Data frame handling\nI0420 00:32:14.226646 2180 log.go:172] (0xc00077ca50) Data frame received for 1\nI0420 00:32:14.226665 2180 log.go:172] (0xc0005e15e0) (1) Data frame handling\nI0420 00:32:14.226682 2180 log.go:172] (0xc0005e15e0) (1) Data frame sent\nI0420 00:32:14.226694 2180 log.go:172] (0xc00077ca50) (0xc0005e15e0) Stream removed, 
broadcasting: 1\nI0420 00:32:14.226707 2180 log.go:172] (0xc00077ca50) Go away received\nI0420 00:32:14.227134 2180 log.go:172] (0xc00077ca50) (0xc0005e15e0) Stream removed, broadcasting: 1\nI0420 00:32:14.227166 2180 log.go:172] (0xc00077ca50) (0xc0005c4000) Stream removed, broadcasting: 3\nI0420 00:32:14.227190 2180 log.go:172] (0xc00077ca50) (0xc0008cca00) Stream removed, broadcasting: 5\n" Apr 20 00:32:14.233: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 20 00:32:14.233: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 20 00:32:14.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 20 00:32:14.470: INFO: stderr: "I0420 00:32:14.370499 2203 log.go:172] (0xc000ba2630) (0xc0009f0000) Create stream\nI0420 00:32:14.370572 2203 log.go:172] (0xc000ba2630) (0xc0009f0000) Stream added, broadcasting: 1\nI0420 00:32:14.373933 2203 log.go:172] (0xc000ba2630) Reply frame received for 1\nI0420 00:32:14.373985 2203 log.go:172] (0xc000ba2630) (0xc0006ed220) Create stream\nI0420 00:32:14.373997 2203 log.go:172] (0xc000ba2630) (0xc0006ed220) Stream added, broadcasting: 3\nI0420 00:32:14.375239 2203 log.go:172] (0xc000ba2630) Reply frame received for 3\nI0420 00:32:14.375267 2203 log.go:172] (0xc000ba2630) (0xc000a2a000) Create stream\nI0420 00:32:14.375278 2203 log.go:172] (0xc000ba2630) (0xc000a2a000) Stream added, broadcasting: 5\nI0420 00:32:14.376425 2203 log.go:172] (0xc000ba2630) Reply frame received for 5\nI0420 00:32:14.437797 2203 log.go:172] (0xc000ba2630) Data frame received for 5\nI0420 00:32:14.437825 2203 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0420 00:32:14.437852 2203 log.go:172] (0xc000a2a000) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0420 00:32:14.464745 2203 log.go:172] (0xc000ba2630) Data frame received for 3\nI0420 00:32:14.464759 2203 log.go:172] (0xc0006ed220) (3) Data frame handling\nI0420 00:32:14.464777 2203 log.go:172] (0xc0006ed220) (3) Data frame sent\nI0420 00:32:14.464841 2203 log.go:172] (0xc000ba2630) Data frame received for 5\nI0420 00:32:14.464858 2203 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0420 00:32:14.465070 2203 log.go:172] (0xc000ba2630) Data frame received for 3\nI0420 00:32:14.465095 2203 log.go:172] (0xc0006ed220) (3) Data frame handling\nI0420 00:32:14.467033 2203 log.go:172] (0xc000ba2630) Data frame received for 1\nI0420 00:32:14.467053 2203 log.go:172] (0xc0009f0000) (1) Data frame handling\nI0420 00:32:14.467066 2203 log.go:172] (0xc0009f0000) (1) Data frame sent\nI0420 00:32:14.467079 2203 log.go:172] (0xc000ba2630) (0xc0009f0000) Stream removed, broadcasting: 1\nI0420 00:32:14.467182 2203 log.go:172] (0xc000ba2630) Go away received\nI0420 00:32:14.467351 2203 log.go:172] (0xc000ba2630) (0xc0009f0000) Stream removed, broadcasting: 1\nI0420 00:32:14.467364 2203 log.go:172] (0xc000ba2630) (0xc0006ed220) Stream removed, broadcasting: 3\nI0420 00:32:14.467371 2203 log.go:172] (0xc000ba2630) (0xc000a2a000) Stream removed, broadcasting: 5\n" Apr 20 00:32:14.470: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 20 00:32:14.470: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 20 00:32:14.471: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 00:32:14.474: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 20 00:32:24.480: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 20 00:32:24.480: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 20 
00:32:24.480: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 20 00:32:24.495: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:24.495: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:24.495: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:24.495: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:24.495: INFO: Apr 20 00:32:24.495: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:25.592: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:25.592: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:25.592: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:25.592: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:25.592: INFO: Apr 20 00:32:25.592: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:26.598: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:26.598: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 
+0000 UTC }] Apr 20 00:32:26.598: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:26.598: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:26.598: INFO: Apr 20 00:32:26.598: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:27.603: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:27.603: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:27.603: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:27.603: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:27.603: INFO: Apr 20 00:32:27.603: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:28.609: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:28.609: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:28.609: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:28.609: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:28.609: INFO: Apr 20 00:32:28.609: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:29.614: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:29.614: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:29.614: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:29.614: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:29.614: INFO: Apr 20 00:32:29.614: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:30.619: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:30.619: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:30.619: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:30.619: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:30.620: INFO: Apr 20 00:32:30.620: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:31.624: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:31.624: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:31.624: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:31.625: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:31.625: INFO: Apr 20 00:32:31.625: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:32.629: INFO: POD NODE PHASE GRACE CONDITIONS Apr 20 00:32:32.629: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:32 +0000 UTC }] Apr 20 00:32:32.630: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:32.630: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:32:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-20 00:31:53 +0000 UTC }] Apr 20 00:32:32.630: INFO: Apr 20 00:32:32.630: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 20 00:32:33.634: INFO: Verifying statefulset ss doesn't scale past 0 for another 855.574958ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1100 Apr 20 00:32:34.638: INFO: Scaling statefulset ss to 0 Apr 20 00:32:34.648: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 20 00:32:34.651: INFO: Deleting all statefulset in ns statefulset-1100 Apr 20 00:32:34.653: INFO: Scaling statefulset ss to 0 Apr 20 00:32:34.662: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 00:32:34.664: INFO: Deleting statefulset ss [AfterEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:32:34.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1100" for this suite. • [SLOW TEST:62.726 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":171,"skipped":3154,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:32:34.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 20 00:32:34.754: INFO: >>> 
kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Apr 20 00:32:35.418: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 20 00:32:37.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939555, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939555, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939555, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939555, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:32:40.193: INFO: Waited 618.269521ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:32:40.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7124" for this suite. 
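The Aggregator test above registers a sample extension API server behind kube-aggregator via an APIService object. A minimal sketch of what such a registration looks like — the group/version name, the Service reference, and the caBundle value here are illustrative placeholders, not values taken from this log:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com        # illustrative <version>.<group> name
spec:
  group: wardle.example.com                # API group served by the extension apiserver (assumed)
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                       # Service fronting sample-apiserver-deployment (assumed)
    namespace: aggregator-7124
  caBundle: "<base64-encoded CA cert>"     # placeholder; verifies the backend's serving certificate
```

Once the APIService reports Available, requests under /apis/&lt;group&gt;/&lt;version&gt; are proxied by the aggregator to the backing Deployment, which is what the "Waited ... for the sample-apiserver to be ready to handle requests" line is checking.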
• [SLOW TEST:6.250 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":172,"skipped":3190,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:32:40.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 20 00:32:40.997: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 20 00:32:41.181: INFO: Waiting for terminating namespaces to be deleted... 
Apr 20 00:32:41.183: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 20 00:32:41.201: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 20 00:32:41.201: INFO: Container kube-proxy ready: true, restart count 0 Apr 20 00:32:41.201: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 20 00:32:41.201: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 00:32:41.201: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 20 00:32:41.218: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 20 00:32:41.218: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 00:32:41.218: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 20 00:32:41.218: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
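The [It] block above validates that two pods may bind the same hostPort as long as the (hostIP, protocol) tuple differs. A hedged sketch of the pod spec fragments involved — pod names, the image, and the containerPort are illustrative assumptions, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1                      # illustrative
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed test image
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2                      # illustrative
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    ports:
    - containerPort: 8080
      hostPort: 54321             # same hostPort as pod1...
      hostIP: 127.0.0.2           # ...but a different hostIP, so the scheduler sees no conflict
      protocol: TCP
```

The third pod in the log reuses hostIP 127.0.0.2 but switches the protocol to UDP, which is likewise conflict-free under this predicate.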
STEP: verifying the node has the label kubernetes.io/e2e-c9d7e4de-081c-4be2-abf1-d1ecd55e5652 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-c9d7e4de-081c-4be2-abf1-d1ecd55e5652 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c9d7e4de-081c-4be2-abf1-d1ecd55e5652 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:32:57.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6821" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.484 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":173,"skipped":3199,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-storage] Subpath Atomic writer 
volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:32:57.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-gk2x STEP: Creating a pod to test atomic-volume-subpath Apr 20 00:32:57.544: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gk2x" in namespace "subpath-6827" to be "Succeeded or Failed" Apr 20 00:32:57.546: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773597ms Apr 20 00:32:59.550: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006521915s Apr 20 00:33:01.554: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 4.010534061s Apr 20 00:33:03.557: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 6.013480362s Apr 20 00:33:05.562: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 8.018197833s Apr 20 00:33:07.567: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.023075948s Apr 20 00:33:09.571: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 12.027398357s Apr 20 00:33:11.575: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 14.031666403s Apr 20 00:33:13.579: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 16.035814672s Apr 20 00:33:15.583: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 18.039928312s Apr 20 00:33:17.588: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 20.044185964s Apr 20 00:33:19.592: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Running", Reason="", readiness=true. Elapsed: 22.048340854s Apr 20 00:33:21.597: INFO: Pod "pod-subpath-test-configmap-gk2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053456906s STEP: Saw pod success Apr 20 00:33:21.597: INFO: Pod "pod-subpath-test-configmap-gk2x" satisfied condition "Succeeded or Failed" Apr 20 00:33:21.600: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-gk2x container test-container-subpath-configmap-gk2x: STEP: delete the pod Apr 20 00:33:21.631: INFO: Waiting for pod pod-subpath-test-configmap-gk2x to disappear Apr 20 00:33:21.694: INFO: Pod pod-subpath-test-configmap-gk2x no longer exists STEP: Deleting pod pod-subpath-test-configmap-gk2x Apr 20 00:33:21.694: INFO: Deleting pod "pod-subpath-test-configmap-gk2x" in namespace "subpath-6827" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:33:21.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6827" for this suite. 
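The Subpath test above projects a single ConfigMap key over an existing file inside the container using a subPath volume mount. A rough sketch of the shape of such a pod spec — the ConfigMap name, key, image, and target path are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                        # assumed image
    command: ["cat", "/etc/hostname"]     # reads the file the ConfigMap key was mounted over
    volumeMounts:
    - name: cfg
      mountPath: /etc/hostname            # mountPath of a file that already exists in the image
      subPath: hostname                   # only this key is projected over that single file
  volumes:
  - name: cfg
    configMap:
      name: subpath-configmap             # assumed ConfigMap name
```

Mounting with subPath replaces only the one file rather than shadowing the whole directory, which is why the test can target the mountPath of an existing file.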
• [SLOW TEST:24.283 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":174,"skipped":3203,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:33:21.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3987.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3987.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3987.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3987.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.64.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.64.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.64.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.64.32_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3987.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3987.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3987.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3987.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.64.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.64.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.64.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.64.32_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 20 00:33:27.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.902: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.905: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.924: INFO: Unable to read jessie_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.927: INFO: Unable to read jessie_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.930: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod 
dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.937: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:27.954: INFO: Lookups using dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97 failed for: [wheezy_udp@dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_udp@dns-test-service.dns-3987.svc.cluster.local jessie_tcp@dns-test-service.dns-3987.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local] Apr 20 00:33:32.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:32.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:32.966: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:32.969: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod 
dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:32.991: INFO: Unable to read jessie_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:32.993: INFO: Unable to read jessie_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:32.996: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:32.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:33.018: INFO: Lookups using dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97 failed for: [wheezy_udp@dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_udp@dns-test-service.dns-3987.svc.cluster.local jessie_tcp@dns-test-service.dns-3987.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local] Apr 20 00:33:37.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-3987.svc.cluster.local from pod 
dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:37.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:37.966: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:37.991: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:38.012: INFO: Unable to read jessie_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:38.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:38.016: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:38.018: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not 
find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:38.063: INFO: Lookups using dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97 failed for: [wheezy_udp@dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_udp@dns-test-service.dns-3987.svc.cluster.local jessie_tcp@dns-test-service.dns-3987.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local] Apr 20 00:33:42.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:42.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:42.966: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:42.969: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:42.989: INFO: Unable to read jessie_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods 
dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:42.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:42.995: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:42.998: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:43.018: INFO: Lookups using dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97 failed for: [wheezy_udp@dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_udp@dns-test-service.dns-3987.svc.cluster.local jessie_tcp@dns-test-service.dns-3987.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local] Apr 20 00:33:47.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:47.961: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods 
dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:47.964: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:47.966: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:47.987: INFO: Unable to read jessie_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:47.990: INFO: Unable to read jessie_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:47.993: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:47.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:48.064: INFO: Lookups using dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97 failed for: [wheezy_udp@dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_udp@dns-test-service.dns-3987.svc.cluster.local jessie_tcp@dns-test-service.dns-3987.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local] Apr 20 00:33:52.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:52.961: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:52.963: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:52.966: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:52.986: INFO: Unable to read jessie_udp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:52.989: INFO: Unable to read jessie_tcp@dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:52.992: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:52.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local from pod dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97: the server could not find the requested resource (get pods dns-test-089a37b9-557f-4f70-997c-d98217d5ae97) Apr 20 00:33:53.016: INFO: Lookups using dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97 failed for: [wheezy_udp@dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@dns-test-service.dns-3987.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_udp@dns-test-service.dns-3987.svc.cluster.local jessie_tcp@dns-test-service.dns-3987.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3987.svc.cluster.local] Apr 20 00:33:58.030: INFO: DNS probes using dns-3987/dns-test-089a37b9-557f-4f70-997c-d98217d5ae97 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:33:58.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3987" for this suite. 
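The `podARec` value queried over UDP and TCP in the probe commands above is built by rewriting the dots of the pod's IPv4 address into dashes and appending the namespace's pod DNS suffix. A standalone sketch of that transformation (the function name `pod_a_record` is illustrative; the e2e probe inlines this via `hostname -i | awk -F. ...`):

```shell
# Build the cluster-DNS A record name for a pod from its IPv4 address and
# namespace, e.g. 10.244.1.5 in dns-3987 -> 10-244-1-5.dns-3987.pod.cluster.local.
# IPv4 only, matching the awk -F. four-field split used in the probe commands.
pod_a_record() {
  ip="$1"; ns="$2"
  echo "$ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"." ns ".pod.cluster.local"}'
}
```

For example, `pod_a_record 10.244.1.5 dns-3987` prints `10-244-1-5.dns-3987.pod.cluster.local`, the name the wheezy and jessie probers resolve for their `PodARecord` checks.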
• [SLOW TEST:36.854 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":175,"skipped":3216,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:33:58.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:33:58.667: INFO: Creating deployment "test-recreate-deployment" Apr 20 00:33:58.671: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 20 00:33:58.707: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 20 00:34:00.956: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 20 00:34:00.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939638, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939638, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939638, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939638, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:34:03.002: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 20 00:34:03.011: INFO: Updating deployment test-recreate-deployment Apr 20 00:34:03.011: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 20 00:34:03.237: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2267 /apis/apps/v1/namespaces/deployment-2267/deployments/test-recreate-deployment 7cf60765-34a3-44dc-919d-b25982f406d2 9467700 2 2020-04-20 00:33:58 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b006a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-20 00:34:03 +0000 UTC,LastTransitionTime:2020-04-20 00:34:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-20 00:34:03 +0000 UTC,LastTransitionTime:2020-04-20 00:33:58 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 20 00:34:03.441: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2267 /apis/apps/v1/namespaces/deployment-2267/replicasets/test-recreate-deployment-5f94c574ff fdf004ba-0a97-4b42-8dc1-1fd090dc66a9 9467697 1 2020-04-20 00:34:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 7cf60765-34a3-44dc-919d-b25982f406d2 0xc002b00d47 
0xc002b00d48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b00e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:34:03.441: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 20 00:34:03.441: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-2267 /apis/apps/v1/namespaces/deployment-2267/replicasets/test-recreate-deployment-846c7dd955 b6c8d6c2-b651-4e2b-924c-aabea4981490 9467689 2 2020-04-20 00:33:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 7cf60765-34a3-44dc-919d-b25982f406d2 0xc002b00ee7 0xc002b00ee8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b01028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:34:03.446: INFO: Pod "test-recreate-deployment-5f94c574ff-2tkgd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-2tkgd test-recreate-deployment-5f94c574ff- deployment-2267 /api/v1/namespaces/deployment-2267/pods/test-recreate-deployment-5f94c574ff-2tkgd 9fdec51a-c72c-41a2-9191-c1f7cfa8fb8d 9467701 0 2020-04-20 00:34:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff fdf004ba-0a97-4b42-8dc1-1fd090dc66a9 0xc0026409f7 0xc0026409f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dc56j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dc56j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dc56j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:34:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:34:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:34:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:34:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-20 00:34:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:34:03.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2267" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":176,"skipped":3221,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:34:03.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook 
read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:34:04.132: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:34:06.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939644, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939644, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939644, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939644, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:34:09.253: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:34:09.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4536-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:34:10.372: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-356" for this suite. STEP: Destroying namespace "webhook-356-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.065 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":177,"skipped":3223,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:34:10.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
Creating the pod Apr 20 00:34:15.190: INFO: Successfully updated pod "annotationupdate5b52845e-1534-4acd-b429-b2e19bda69c6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:34:17.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1324" for this suite. • [SLOW TEST:6.712 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3229,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:34:17.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace 
pod-network-test-8161 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 20 00:34:17.270: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 20 00:34:17.335: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:34:19.340: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:34:21.339: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:34:23.340: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:34:25.339: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:34:27.339: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:34:29.339: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:34:31.340: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 20 00:34:31.347: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 20 00:34:33.351: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 20 00:34:35.351: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 20 00:34:37.351: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 20 00:34:41.404: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.233:8080/dial?request=hostname&protocol=http&host=10.244.2.232&port=8080&tries=1'] Namespace:pod-network-test-8161 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:34:41.404: INFO: >>> kubeConfig: /root/.kube/config I0420 00:34:41.440412 8 log.go:172] (0xc002aee370) (0xc001e91e00) Create stream I0420 00:34:41.440445 8 log.go:172] (0xc002aee370) (0xc001e91e00) Stream added, broadcasting: 1 I0420 00:34:41.442544 8 log.go:172] (0xc002aee370) Reply frame received for 1 I0420 00:34:41.442578 8 log.go:172] (0xc002aee370) (0xc001a6c000) 
Create stream I0420 00:34:41.442590 8 log.go:172] (0xc002aee370) (0xc001a6c000) Stream added, broadcasting: 3 I0420 00:34:41.443816 8 log.go:172] (0xc002aee370) Reply frame received for 3 I0420 00:34:41.443865 8 log.go:172] (0xc002aee370) (0xc001e91ea0) Create stream I0420 00:34:41.443891 8 log.go:172] (0xc002aee370) (0xc001e91ea0) Stream added, broadcasting: 5 I0420 00:34:41.444816 8 log.go:172] (0xc002aee370) Reply frame received for 5 I0420 00:34:41.545523 8 log.go:172] (0xc002aee370) Data frame received for 3 I0420 00:34:41.545560 8 log.go:172] (0xc001a6c000) (3) Data frame handling I0420 00:34:41.545588 8 log.go:172] (0xc001a6c000) (3) Data frame sent I0420 00:34:41.546275 8 log.go:172] (0xc002aee370) Data frame received for 5 I0420 00:34:41.546315 8 log.go:172] (0xc001e91ea0) (5) Data frame handling I0420 00:34:41.546343 8 log.go:172] (0xc002aee370) Data frame received for 3 I0420 00:34:41.546357 8 log.go:172] (0xc001a6c000) (3) Data frame handling I0420 00:34:41.547982 8 log.go:172] (0xc002aee370) Data frame received for 1 I0420 00:34:41.547994 8 log.go:172] (0xc001e91e00) (1) Data frame handling I0420 00:34:41.548007 8 log.go:172] (0xc001e91e00) (1) Data frame sent I0420 00:34:41.548026 8 log.go:172] (0xc002aee370) (0xc001e91e00) Stream removed, broadcasting: 1 I0420 00:34:41.548089 8 log.go:172] (0xc002aee370) (0xc001e91e00) Stream removed, broadcasting: 1 I0420 00:34:41.548106 8 log.go:172] (0xc002aee370) (0xc001a6c000) Stream removed, broadcasting: 3 I0420 00:34:41.548186 8 log.go:172] (0xc002aee370) Go away received I0420 00:34:41.548290 8 log.go:172] (0xc002aee370) (0xc001e91ea0) Stream removed, broadcasting: 5 Apr 20 00:34:41.548: INFO: Waiting for responses: map[] Apr 20 00:34:41.551: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.233:8080/dial?request=hostname&protocol=http&host=10.244.1.135&port=8080&tries=1'] Namespace:pod-network-test-8161 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 20 00:34:41.551: INFO: >>> kubeConfig: /root/.kube/config I0420 00:34:41.582417 8 log.go:172] (0xc002aeea50) (0xc000c0a960) Create stream I0420 00:34:41.582441 8 log.go:172] (0xc002aeea50) (0xc000c0a960) Stream added, broadcasting: 1 I0420 00:34:41.583704 8 log.go:172] (0xc002aeea50) Reply frame received for 1 I0420 00:34:41.583734 8 log.go:172] (0xc002aeea50) (0xc000c0ab40) Create stream I0420 00:34:41.583745 8 log.go:172] (0xc002aeea50) (0xc000c0ab40) Stream added, broadcasting: 3 I0420 00:34:41.584458 8 log.go:172] (0xc002aeea50) Reply frame received for 3 I0420 00:34:41.584503 8 log.go:172] (0xc002aeea50) (0xc001a6c1e0) Create stream I0420 00:34:41.584520 8 log.go:172] (0xc002aeea50) (0xc001a6c1e0) Stream added, broadcasting: 5 I0420 00:34:41.585259 8 log.go:172] (0xc002aeea50) Reply frame received for 5 I0420 00:34:41.648837 8 log.go:172] (0xc002aeea50) Data frame received for 3 I0420 00:34:41.648867 8 log.go:172] (0xc000c0ab40) (3) Data frame handling I0420 00:34:41.648896 8 log.go:172] (0xc000c0ab40) (3) Data frame sent I0420 00:34:41.649715 8 log.go:172] (0xc002aeea50) Data frame received for 5 I0420 00:34:41.649744 8 log.go:172] (0xc001a6c1e0) (5) Data frame handling I0420 00:34:41.649974 8 log.go:172] (0xc002aeea50) Data frame received for 3 I0420 00:34:41.649993 8 log.go:172] (0xc000c0ab40) (3) Data frame handling I0420 00:34:41.651349 8 log.go:172] (0xc002aeea50) Data frame received for 1 I0420 00:34:41.651383 8 log.go:172] (0xc000c0a960) (1) Data frame handling I0420 00:34:41.651409 8 log.go:172] (0xc000c0a960) (1) Data frame sent I0420 00:34:41.651441 8 log.go:172] (0xc002aeea50) (0xc000c0a960) Stream removed, broadcasting: 1 I0420 00:34:41.651471 8 log.go:172] (0xc002aeea50) Go away received I0420 00:34:41.651534 8 log.go:172] (0xc002aeea50) (0xc000c0a960) Stream removed, broadcasting: 1 I0420 00:34:41.651562 8 log.go:172] (0xc002aeea50) (0xc000c0ab40) Stream removed, broadcasting: 3 I0420 
00:34:41.651574 8 log.go:172] (0xc002aeea50) (0xc001a6c1e0) Stream removed, broadcasting: 5 Apr 20 00:34:41.651: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:34:41.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8161" for this suite. • [SLOW TEST:24.428 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3238,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:34:41.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-9732712b-dcc9-44e6-ba1b-d084de3b8c6e STEP: Creating a pod to test consume secrets Apr 20 00:34:41.769: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c" in namespace "projected-6133" to be "Succeeded or Failed" Apr 20 00:34:41.783: INFO: Pod "pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.762975ms Apr 20 00:34:44.227: INFO: Pod "pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457400883s Apr 20 00:34:46.234: INFO: Pod "pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.464060992s STEP: Saw pod success Apr 20 00:34:46.234: INFO: Pod "pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c" satisfied condition "Succeeded or Failed" Apr 20 00:34:46.236: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c container projected-secret-volume-test: STEP: delete the pod Apr 20 00:34:46.286: INFO: Waiting for pod pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c to disappear Apr 20 00:34:46.389: INFO: Pod pod-projected-secrets-eb021e23-a836-4d4b-9be0-bece51a3314c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:34:46.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6133" for this suite. 
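(For context: the projected-secret test above creates a pod that mounts a Secret through a `projected` volume with an explicit key-to-path mapping and a per-item file mode. A minimal sketch of the kind of manifest it exercises — all names and the key/path values here are illustrative, not the generated ones from the run:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example   # illustrative Secret name
          items:
          - key: data-1              # key in the Secret
            path: new-path-data-1    # file name inside the mount ("mappings")
            mode: 0400               # per-item permissions ("Item Mode set")
```

The pod runs to completion ("Succeeded or Failed" in the log) because its container just `cat`s the mapped file and exits.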
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3255,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:34:46.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9582 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 20 00:34:46.667: INFO: Found 0 stateful pods, waiting for 3 Apr 20 00:34:56.689: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:34:56.689: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:34:56.689: INFO: Waiting for pod ss2-2 to enter Running - 
Ready=true, currently Pending - Ready=false Apr 20 00:35:06.671: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:35:06.672: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:35:06.672: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 20 00:35:06.697: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 20 00:35:16.747: INFO: Updating stateful set ss2 Apr 20 00:35:16.780: INFO: Waiting for Pod statefulset-9582/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 20 00:35:27.247: INFO: Found 2 stateful pods, waiting for 3 Apr 20 00:35:37.251: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:35:37.251: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 00:35:37.251: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 20 00:35:37.276: INFO: Updating stateful set ss2 Apr 20 00:35:37.288: INFO: Waiting for Pod statefulset-9582/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 20 00:35:47.313: INFO: Updating stateful set ss2 Apr 20 00:35:47.348: INFO: Waiting for StatefulSet statefulset-9582/ss2 to complete update Apr 20 00:35:47.348: INFO: Waiting for Pod statefulset-9582/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 20 00:35:57.354: INFO: Deleting all statefulset in ns statefulset-9582 Apr 20 00:35:57.356: INFO: Scaling statefulset ss2 to 0 Apr 20 00:36:07.379: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 00:36:07.383: INFO: Waiting for stateful set status.replicas to become 0, currently 1 Apr 20 00:36:17.387: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:36:17.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9582" for this suite. • [SLOW TEST:91.007 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":181,"skipped":3258,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected 
secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:36:17.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-7b2422dd-38a6-4c5c-a433-3b5a6a1b58f7 STEP: Creating a pod to test consume secrets Apr 20 00:36:17.501: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096" in namespace "projected-9183" to be "Succeeded or Failed" Apr 20 00:36:17.510: INFO: Pod "pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545517ms Apr 20 00:36:19.558: INFO: Pod "pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057158138s Apr 20 00:36:21.630: INFO: Pod "pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.129066136s STEP: Saw pod success Apr 20 00:36:21.630: INFO: Pod "pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096" satisfied condition "Succeeded or Failed" Apr 20 00:36:21.633: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096 container projected-secret-volume-test: STEP: delete the pod Apr 20 00:36:21.710: INFO: Waiting for pod pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096 to disappear Apr 20 00:36:21.786: INFO: Pod pod-projected-secrets-0060cd76-e160-4554-9b97-d8b832791096 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:36:21.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9183" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3271,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:36:21.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 20 00:36:21.867: INFO: Waiting up to 5m0s for pod "var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17" in namespace "var-expansion-4365" to be "Succeeded or Failed" Apr 20 00:36:21.885: INFO: Pod "var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17": Phase="Pending", Reason="", readiness=false. Elapsed: 17.359011ms Apr 20 00:36:23.889: INFO: Pod "var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022126313s Apr 20 00:36:25.893: INFO: Pod "var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025716328s STEP: Saw pod success Apr 20 00:36:25.893: INFO: Pod "var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17" satisfied condition "Succeeded or Failed" Apr 20 00:36:25.895: INFO: Trying to get logs from node latest-worker pod var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17 container dapi-container: STEP: delete the pod Apr 20 00:36:25.915: INFO: Waiting for pod var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17 to disappear Apr 20 00:36:25.919: INFO: Pod var-expansion-86c4be53-cb4b-4043-9d97-20261c6a6b17 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:36:25.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4365" for this suite. 
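(For context: the variable-expansion test above verifies that one environment variable can be composed from others using `$(VAR)` references in the pod spec. A hedged sketch of such a manifest, with illustrative names and values:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR                 # composed from the two variables defined above
      value: "$(FOO);;$(BAR)"     # kubelet expands this to "foo-value;;bar-value"
```

Note that `$(VAR)` expansion is performed by Kubernetes itself, not by the shell, so it only works for variables defined earlier in the same `env` list.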
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3275,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:36:25.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 20 00:36:32.058: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3555 PodName:pod-sharedvolume-fb8e0752-288f-4e24-b6f0-3a0ebaa7269e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:36:32.058: INFO: >>> kubeConfig: /root/.kube/config I0420 00:36:32.093529 8 log.go:172] (0xc002edc2c0) (0xc001aa3f40) Create stream I0420 00:36:32.093562 8 log.go:172] (0xc002edc2c0) (0xc001aa3f40) Stream added, broadcasting: 1 I0420 00:36:32.095659 8 log.go:172] (0xc002edc2c0) Reply frame received for 1 I0420 00:36:32.095705 8 log.go:172] (0xc002edc2c0) (0xc0022ec0a0) Create stream I0420 00:36:32.095729 8 log.go:172] (0xc002edc2c0) (0xc0022ec0a0) Stream 
added, broadcasting: 3 I0420 00:36:32.096738 8 log.go:172] (0xc002edc2c0) Reply frame received for 3 I0420 00:36:32.096774 8 log.go:172] (0xc002edc2c0) (0xc002204780) Create stream I0420 00:36:32.096784 8 log.go:172] (0xc002edc2c0) (0xc002204780) Stream added, broadcasting: 5 I0420 00:36:32.097934 8 log.go:172] (0xc002edc2c0) Reply frame received for 5 I0420 00:36:32.185604 8 log.go:172] (0xc002edc2c0) Data frame received for 5 I0420 00:36:32.185633 8 log.go:172] (0xc002204780) (5) Data frame handling I0420 00:36:32.185657 8 log.go:172] (0xc002edc2c0) Data frame received for 3 I0420 00:36:32.185673 8 log.go:172] (0xc0022ec0a0) (3) Data frame handling I0420 00:36:32.185689 8 log.go:172] (0xc0022ec0a0) (3) Data frame sent I0420 00:36:32.185703 8 log.go:172] (0xc002edc2c0) Data frame received for 3 I0420 00:36:32.185710 8 log.go:172] (0xc0022ec0a0) (3) Data frame handling I0420 00:36:32.187397 8 log.go:172] (0xc002edc2c0) Data frame received for 1 I0420 00:36:32.187434 8 log.go:172] (0xc001aa3f40) (1) Data frame handling I0420 00:36:32.187474 8 log.go:172] (0xc001aa3f40) (1) Data frame sent I0420 00:36:32.187494 8 log.go:172] (0xc002edc2c0) (0xc001aa3f40) Stream removed, broadcasting: 1 I0420 00:36:32.187584 8 log.go:172] (0xc002edc2c0) Go away received I0420 00:36:32.187711 8 log.go:172] (0xc002edc2c0) (0xc001aa3f40) Stream removed, broadcasting: 1 I0420 00:36:32.187757 8 log.go:172] (0xc002edc2c0) (0xc0022ec0a0) Stream removed, broadcasting: 3 I0420 00:36:32.187780 8 log.go:172] (0xc002edc2c0) (0xc002204780) Stream removed, broadcasting: 5 Apr 20 00:36:32.187: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:36:32.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3555" for this suite. 
• [SLOW TEST:6.266 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":184,"skipped":3280,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:36:32.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Apr 20 00:36:36.880: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2807 pod-service-account-009eb3b5-ee28-43d0-a1e3-cd21cf4f6363 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 20 00:36:37.116: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2807 pod-service-account-009eb3b5-ee28-43d0-a1e3-cd21cf4f6363 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container 
Apr 20 00:36:37.329: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2807 pod-service-account-009eb3b5-ee28-43d0-a1e3-cd21cf4f6363 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:36:37.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2807" for this suite. • [SLOW TEST:5.313 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":185,"skipped":3290,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:36:37.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create 
role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:36:38.010: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:36:40.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939798, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939798, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939798, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939797, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:36:42.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939798, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939798, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939798, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939797, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:36:45.048: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:36:55.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8483" for this suite. STEP: Destroying namespace "webhook-8483-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.770 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":186,"skipped":3291,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:36:55.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:36:55.387: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe" in namespace "projected-9768" to be "Succeeded or Failed" Apr 20 00:36:55.415: INFO: Pod "downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe": Phase="Pending", Reason="", readiness=false. Elapsed: 28.406137ms Apr 20 00:36:57.419: INFO: Pod "downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031561786s Apr 20 00:36:59.422: INFO: Pod "downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035338789s STEP: Saw pod success Apr 20 00:36:59.422: INFO: Pod "downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe" satisfied condition "Succeeded or Failed" Apr 20 00:36:59.425: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe container client-container: STEP: delete the pod Apr 20 00:36:59.460: INFO: Waiting for pod downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe to disappear Apr 20 00:36:59.511: INFO: Pod downwardapi-volume-f7ac8930-fc18-458d-8a6a-525ab1fcfafe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:36:59.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9768" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3297,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:36:59.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:36:59.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:37:01.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939819, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939819, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939819, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722939819, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:37:04.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:37:05.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-421" for this suite. STEP: Destroying namespace "webhook-421-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.607 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":188,"skipped":3297,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:37:05.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 20 00:37:05.197: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 18.059167ms)
Apr 20 00:37:05.201: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.188492ms)
Apr 20 00:37:05.218: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 16.702721ms)
Apr 20 00:37:05.221: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.8875ms)
Apr 20 00:37:05.224: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.658407ms)
Apr 20 00:37:05.227: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.126302ms)
Apr 20 00:37:05.230: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.318802ms)
Apr 20 00:37:05.234: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.487935ms)
Apr 20 00:37:05.237: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.344603ms)
Apr 20 00:37:05.240: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.271966ms)
Apr 20 00:37:05.244: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.411684ms)
Apr 20 00:37:05.247: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.57058ms)
Apr 20 00:37:05.251: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.64478ms)
Apr 20 00:37:05.255: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.684434ms)
Apr 20 00:37:05.258: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.370656ms)
Apr 20 00:37:05.262: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.668174ms)
Apr 20 00:37:05.265: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.497503ms)
Apr 20 00:37:05.269: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.155624ms)
Apr 20 00:37:05.272: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.54956ms)
Apr 20 00:37:05.276: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.497471ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:37:05.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6875" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":189,"skipped":3297,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:37:05.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:37:11.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2708" for this suite. STEP: Destroying namespace "nsdeletetest-7831" for this suite. Apr 20 00:37:11.613: INFO: Namespace nsdeletetest-7831 was already deleted STEP: Destroying namespace "nsdeletetest-7602" for this suite. • [SLOW TEST:6.333 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":190,"skipped":3303,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:37:11.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be 
provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:37:11.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 20 00:37:12.398: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-20T00:37:12Z generation:1 name:name1 resourceVersion:9469078 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:08b50cb3-7787-4eff-992a-250793e88bc0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 20 00:37:22.403: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-20T00:37:22Z generation:1 name:name2 resourceVersion:9469116 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f780b673-3062-4b39-b7d8-ba568aeee83c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 20 00:37:32.425: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-20T00:37:12Z generation:2 name:name1 resourceVersion:9469146 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:08b50cb3-7787-4eff-992a-250793e88bc0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 20 00:37:42.431: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-20T00:37:22Z generation:2 name:name2 resourceVersion:9469175 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f780b673-3062-4b39-b7d8-ba568aeee83c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 20 00:37:52.437: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 
content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-20T00:37:12Z generation:2 name:name1 resourceVersion:9469205 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:08b50cb3-7787-4eff-992a-250793e88bc0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 20 00:38:02.445: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-20T00:37:22Z generation:2 name:name2 resourceVersion:9469234 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f780b673-3062-4b39-b7d8-ba568aeee83c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:38:12.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-48" for this suite. 
• [SLOW TEST:61.347 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":191,"skipped":3304,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:38:12.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 20 00:38:13.046: INFO: PodSpec: initContainers in spec.initContainers 
[AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:38:20.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6" for this suite. • [SLOW TEST:7.646 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":192,"skipped":3344,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:38:20.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test 
downward API volume plugin Apr 20 00:38:20.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972" in namespace "projected-2379" to be "Succeeded or Failed" Apr 20 00:38:20.712: INFO: Pod "downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972": Phase="Pending", Reason="", readiness=false. Elapsed: 3.444867ms Apr 20 00:38:22.716: INFO: Pod "downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007817201s Apr 20 00:38:24.721: INFO: Pod "downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01228638s STEP: Saw pod success Apr 20 00:38:24.721: INFO: Pod "downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972" satisfied condition "Succeeded or Failed" Apr 20 00:38:24.724: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972 container client-container: STEP: delete the pod Apr 20 00:38:24.778: INFO: Waiting for pod downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972 to disappear Apr 20 00:38:24.790: INFO: Pod downwardapi-volume-dc24fc11-0ddc-4136-9cf8-0078201f5972 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:38:24.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2379" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3381,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:38:24.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:38:24.861: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 20 00:38:24.868: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:24.901: INFO: Number of nodes with available pods: 0 Apr 20 00:38:24.901: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:38:25.908: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:25.911: INFO: Number of nodes with available pods: 0 Apr 20 00:38:25.911: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:38:26.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:26.990: INFO: Number of nodes with available pods: 0 Apr 20 00:38:26.990: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:38:27.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:27.909: INFO: Number of nodes with available pods: 0 Apr 20 00:38:27.909: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:38:28.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:28.926: INFO: Number of nodes with available pods: 2 Apr 20 00:38:28.926: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 20 00:38:28.951: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 20 00:38:28.951: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:28.975: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:29.979: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:29.979: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:29.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:30.992: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:30.992: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:30.996: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:31.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:31.980: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 20 00:38:31.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:32.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:32.980: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:32.980: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:32.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:33.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:33.980: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:33.980: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:33.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:34.979: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:34.979: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 20 00:38:34.980: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:34.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:35.979: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:35.979: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:35.979: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:35.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:36.979: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:36.979: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:36.979: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:36.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:37.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:37.980: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 20 00:38:37.980: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:37.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:38.979: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:38.979: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:38.979: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:38.981: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:39.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:39.980: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:39.980: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:39.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:40.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:40.980: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 20 00:38:40.980: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:40.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:41.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:41.980: INFO: Wrong image for pod: daemon-set-qkskl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:41.980: INFO: Pod daemon-set-qkskl is not available Apr 20 00:38:41.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:42.979: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:42.980: INFO: Pod daemon-set-8n2dz is not available Apr 20 00:38:42.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:43.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:43.980: INFO: Pod daemon-set-8n2dz is not available Apr 20 00:38:43.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:44.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 20 00:38:44.980: INFO: Pod daemon-set-8n2dz is not available Apr 20 00:38:44.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:46.023: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:46.029: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:46.980: INFO: Wrong image for pod: daemon-set-7trgn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 20 00:38:46.980: INFO: Pod daemon-set-7trgn is not available Apr 20 00:38:46.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:47.980: INFO: Pod daemon-set-wqnpl is not available Apr 20 00:38:47.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
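
The update loop above is driven by a DaemonSet whose update strategy is RollingUpdate: the controller replaces one node's pod at a time, which is why the log alternates between "Wrong image for pod" and "Pod ... is not available" until both nodes converge. A hedged sketch of such a DaemonSet (field values are illustrative):

```yaml
# Sketch of a DaemonSet using the RollingUpdate strategy the test
# exercises; the image is later changed (e.g. to agnhost:2.12) to
# trigger the rolling replacement seen in the log above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # default: one node's pod replaced at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Note that pods from this DaemonSet carry no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, which is why the test repeatedly skips the `latest-control-plane` node.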
Apr 20 00:38:47.988: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:47.992: INFO: Number of nodes with available pods: 1 Apr 20 00:38:47.992: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:38:48.996: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:49.000: INFO: Number of nodes with available pods: 1 Apr 20 00:38:49.000: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:38:49.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:50.000: INFO: Number of nodes with available pods: 1 Apr 20 00:38:50.000: INFO: Node latest-worker2 is running more than one daemon pod Apr 20 00:38:50.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 00:38:51.001: INFO: Number of nodes with available pods: 2 Apr 20 00:38:51.001: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5487, will wait for the garbage collector to delete the pods Apr 20 00:38:51.076: INFO: Deleting DaemonSet.extensions daemon-set took: 7.147144ms Apr 20 00:38:51.376: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.202309ms Apr 20 00:39:03.099: INFO: Number of nodes with available pods: 0 Apr 20 00:39:03.099: INFO: Number of running nodes: 0, number of 
available pods: 0 Apr 20 00:39:03.102: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5487/daemonsets","resourceVersion":"9469564"},"items":null} Apr 20 00:39:03.105: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5487/pods","resourceVersion":"9469564"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:39:03.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5487" for this suite. • [SLOW TEST:38.322 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":194,"skipped":3381,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:03.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4355 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4355 I0420 00:39:03.269456 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4355, replica count: 2 I0420 00:39:06.319936 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 00:39:09.320149 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 20 00:39:09.320: INFO: Creating new exec pod Apr 20 00:39:14.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4355 execpods8z2r -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 20 00:39:14.646: INFO: stderr: "I0420 00:39:14.540625 2285 log.go:172] (0xc000a87810) (0xc000a668c0) Create stream\nI0420 00:39:14.540691 2285 log.go:172] (0xc000a87810) (0xc000a668c0) Stream added, broadcasting: 1\nI0420 00:39:14.545604 2285 log.go:172] (0xc000a87810) Reply frame received for 1\nI0420 00:39:14.545645 2285 log.go:172] (0xc000a87810) (0xc0005eb5e0) Create stream\nI0420 00:39:14.545656 2285 log.go:172] (0xc000a87810) (0xc0005eb5e0) Stream added, broadcasting: 3\nI0420 00:39:14.546597 2285 log.go:172] (0xc000a87810) Reply frame received for 3\nI0420 00:39:14.546627 2285 log.go:172] (0xc000a87810) (0xc000448a00) Create 
stream\nI0420 00:39:14.546634 2285 log.go:172] (0xc000a87810) (0xc000448a00) Stream added, broadcasting: 5\nI0420 00:39:14.547461 2285 log.go:172] (0xc000a87810) Reply frame received for 5\nI0420 00:39:14.638452 2285 log.go:172] (0xc000a87810) Data frame received for 5\nI0420 00:39:14.638519 2285 log.go:172] (0xc000448a00) (5) Data frame handling\nI0420 00:39:14.638542 2285 log.go:172] (0xc000448a00) (5) Data frame sent\nI0420 00:39:14.638555 2285 log.go:172] (0xc000a87810) Data frame received for 5\nI0420 00:39:14.638578 2285 log.go:172] (0xc000448a00) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0420 00:39:14.638622 2285 log.go:172] (0xc000a87810) Data frame received for 3\nI0420 00:39:14.638674 2285 log.go:172] (0xc0005eb5e0) (3) Data frame handling\nI0420 00:39:14.640630 2285 log.go:172] (0xc000a87810) Data frame received for 1\nI0420 00:39:14.640671 2285 log.go:172] (0xc000a668c0) (1) Data frame handling\nI0420 00:39:14.640690 2285 log.go:172] (0xc000a668c0) (1) Data frame sent\nI0420 00:39:14.640706 2285 log.go:172] (0xc000a87810) (0xc000a668c0) Stream removed, broadcasting: 1\nI0420 00:39:14.640730 2285 log.go:172] (0xc000a87810) Go away received\nI0420 00:39:14.641416 2285 log.go:172] (0xc000a87810) (0xc000a668c0) Stream removed, broadcasting: 1\nI0420 00:39:14.641440 2285 log.go:172] (0xc000a87810) (0xc0005eb5e0) Stream removed, broadcasting: 3\nI0420 00:39:14.641453 2285 log.go:172] (0xc000a87810) (0xc000448a00) Stream removed, broadcasting: 5\n" Apr 20 00:39:14.646: INFO: stdout: "" Apr 20 00:39:14.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4355 execpods8z2r -- /bin/sh -x -c nc -zv -t -w 2 10.96.242.52 80' Apr 20 00:39:14.898: INFO: stderr: "I0420 00:39:14.783270 2308 log.go:172] (0xc00097a000) (0xc0004a2aa0) Create stream\nI0420 00:39:14.783327 2308 log.go:172] 
(0xc00097a000) (0xc0004a2aa0) Stream added, broadcasting: 1\nI0420 00:39:14.785857 2308 log.go:172] (0xc00097a000) Reply frame received for 1\nI0420 00:39:14.785919 2308 log.go:172] (0xc00097a000) (0xc0006e74a0) Create stream\nI0420 00:39:14.785933 2308 log.go:172] (0xc00097a000) (0xc0006e74a0) Stream added, broadcasting: 3\nI0420 00:39:14.787006 2308 log.go:172] (0xc00097a000) Reply frame received for 3\nI0420 00:39:14.787071 2308 log.go:172] (0xc00097a000) (0xc0004a2b40) Create stream\nI0420 00:39:14.787089 2308 log.go:172] (0xc00097a000) (0xc0004a2b40) Stream added, broadcasting: 5\nI0420 00:39:14.788203 2308 log.go:172] (0xc00097a000) Reply frame received for 5\nI0420 00:39:14.891059 2308 log.go:172] (0xc00097a000) Data frame received for 5\nI0420 00:39:14.891104 2308 log.go:172] (0xc0004a2b40) (5) Data frame handling\nI0420 00:39:14.891120 2308 log.go:172] (0xc0004a2b40) (5) Data frame sent\nI0420 00:39:14.891136 2308 log.go:172] (0xc00097a000) Data frame received for 5\nI0420 00:39:14.891153 2308 log.go:172] (0xc0004a2b40) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.242.52 80\nConnection to 10.96.242.52 80 port [tcp/http] succeeded!\nI0420 00:39:14.891204 2308 log.go:172] (0xc00097a000) Data frame received for 3\nI0420 00:39:14.891233 2308 log.go:172] (0xc0006e74a0) (3) Data frame handling\nI0420 00:39:14.892679 2308 log.go:172] (0xc00097a000) Data frame received for 1\nI0420 00:39:14.892694 2308 log.go:172] (0xc0004a2aa0) (1) Data frame handling\nI0420 00:39:14.892703 2308 log.go:172] (0xc0004a2aa0) (1) Data frame sent\nI0420 00:39:14.892714 2308 log.go:172] (0xc00097a000) (0xc0004a2aa0) Stream removed, broadcasting: 1\nI0420 00:39:14.892728 2308 log.go:172] (0xc00097a000) Go away received\nI0420 00:39:14.893289 2308 log.go:172] (0xc00097a000) (0xc0004a2aa0) Stream removed, broadcasting: 1\nI0420 00:39:14.893316 2308 log.go:172] (0xc00097a000) (0xc0006e74a0) Stream removed, broadcasting: 3\nI0420 00:39:14.893333 2308 log.go:172] (0xc00097a000) 
(0xc0004a2b40) Stream removed, broadcasting: 5\n" Apr 20 00:39:14.898: INFO: stdout: "" Apr 20 00:39:14.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4355 execpods8z2r -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30653' Apr 20 00:39:15.091: INFO: stderr: "I0420 00:39:15.025833 2328 log.go:172] (0xc00059f970) (0xc0009c0000) Create stream\nI0420 00:39:15.025892 2328 log.go:172] (0xc00059f970) (0xc0009c0000) Stream added, broadcasting: 1\nI0420 00:39:15.028793 2328 log.go:172] (0xc00059f970) Reply frame received for 1\nI0420 00:39:15.028857 2328 log.go:172] (0xc00059f970) (0xc000a3a000) Create stream\nI0420 00:39:15.028890 2328 log.go:172] (0xc00059f970) (0xc000a3a000) Stream added, broadcasting: 3\nI0420 00:39:15.030109 2328 log.go:172] (0xc00059f970) Reply frame received for 3\nI0420 00:39:15.030203 2328 log.go:172] (0xc00059f970) (0xc00071d220) Create stream\nI0420 00:39:15.030220 2328 log.go:172] (0xc00059f970) (0xc00071d220) Stream added, broadcasting: 5\nI0420 00:39:15.031267 2328 log.go:172] (0xc00059f970) Reply frame received for 5\nI0420 00:39:15.085549 2328 log.go:172] (0xc00059f970) Data frame received for 3\nI0420 00:39:15.085570 2328 log.go:172] (0xc000a3a000) (3) Data frame handling\nI0420 00:39:15.085593 2328 log.go:172] (0xc00059f970) Data frame received for 5\nI0420 00:39:15.085601 2328 log.go:172] (0xc00071d220) (5) Data frame handling\nI0420 00:39:15.085621 2328 log.go:172] (0xc00071d220) (5) Data frame sent\nI0420 00:39:15.085627 2328 log.go:172] (0xc00059f970) Data frame received for 5\nI0420 00:39:15.085633 2328 log.go:172] (0xc00071d220) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30653\nConnection to 172.17.0.13 30653 port [tcp/30653] succeeded!\nI0420 00:39:15.087302 2328 log.go:172] (0xc00059f970) Data frame received for 1\nI0420 00:39:15.087327 2328 log.go:172] (0xc0009c0000) (1) Data frame handling\nI0420 00:39:15.087348 2328 log.go:172] 
(0xc0009c0000) (1) Data frame sent\nI0420 00:39:15.087369 2328 log.go:172] (0xc00059f970) (0xc0009c0000) Stream removed, broadcasting: 1\nI0420 00:39:15.087503 2328 log.go:172] (0xc00059f970) Go away received\nI0420 00:39:15.087778 2328 log.go:172] (0xc00059f970) (0xc0009c0000) Stream removed, broadcasting: 1\nI0420 00:39:15.087791 2328 log.go:172] (0xc00059f970) (0xc000a3a000) Stream removed, broadcasting: 3\nI0420 00:39:15.087799 2328 log.go:172] (0xc00059f970) (0xc00071d220) Stream removed, broadcasting: 5\n" Apr 20 00:39:15.091: INFO: stdout: "" Apr 20 00:39:15.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4355 execpods8z2r -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30653' Apr 20 00:39:15.305: INFO: stderr: "I0420 00:39:15.228381 2349 log.go:172] (0xc0005bca50) (0xc000544320) Create stream\nI0420 00:39:15.228452 2349 log.go:172] (0xc0005bca50) (0xc000544320) Stream added, broadcasting: 1\nI0420 00:39:15.231101 2349 log.go:172] (0xc0005bca50) Reply frame received for 1\nI0420 00:39:15.231153 2349 log.go:172] (0xc0005bca50) (0xc0005443c0) Create stream\nI0420 00:39:15.231178 2349 log.go:172] (0xc0005bca50) (0xc0005443c0) Stream added, broadcasting: 3\nI0420 00:39:15.232008 2349 log.go:172] (0xc0005bca50) Reply frame received for 3\nI0420 00:39:15.232043 2349 log.go:172] (0xc0005bca50) (0xc0004770e0) Create stream\nI0420 00:39:15.232058 2349 log.go:172] (0xc0005bca50) (0xc0004770e0) Stream added, broadcasting: 5\nI0420 00:39:15.232800 2349 log.go:172] (0xc0005bca50) Reply frame received for 5\nI0420 00:39:15.297807 2349 log.go:172] (0xc0005bca50) Data frame received for 3\nI0420 00:39:15.297874 2349 log.go:172] (0xc0005443c0) (3) Data frame handling\nI0420 00:39:15.297917 2349 log.go:172] (0xc0005bca50) Data frame received for 5\nI0420 00:39:15.297946 2349 log.go:172] (0xc0004770e0) (5) Data frame handling\nI0420 00:39:15.297983 2349 log.go:172] (0xc0004770e0) (5) Data 
frame sent\nI0420 00:39:15.298007 2349 log.go:172] (0xc0005bca50) Data frame received for 5\nI0420 00:39:15.298018 2349 log.go:172] (0xc0004770e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30653\nConnection to 172.17.0.12 30653 port [tcp/30653] succeeded!\nI0420 00:39:15.300151 2349 log.go:172] (0xc0005bca50) Data frame received for 1\nI0420 00:39:15.300185 2349 log.go:172] (0xc000544320) (1) Data frame handling\nI0420 00:39:15.300206 2349 log.go:172] (0xc000544320) (1) Data frame sent\nI0420 00:39:15.300236 2349 log.go:172] (0xc0005bca50) (0xc000544320) Stream removed, broadcasting: 1\nI0420 00:39:15.300260 2349 log.go:172] (0xc0005bca50) Go away received\nI0420 00:39:15.300815 2349 log.go:172] (0xc0005bca50) (0xc000544320) Stream removed, broadcasting: 1\nI0420 00:39:15.300832 2349 log.go:172] (0xc0005bca50) (0xc0005443c0) Stream removed, broadcasting: 3\nI0420 00:39:15.300847 2349 log.go:172] (0xc0005bca50) (0xc0004770e0) Stream removed, broadcasting: 5\n" Apr 20 00:39:15.305: INFO: stdout: "" Apr 20 00:39:15.305: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:39:15.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4355" for this suite. 
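
The service mutation exercised above has two states. A hedged sketch of both (the external name and selector labels are illustrative, not the exact fixture):

```yaml
# State 1: the service resolves to an external DNS name.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com   # illustrative target
---
# State 2: the test switches the type to NodePort and backs the
# service with replication-controller pods; kube-proxy then allocates
# a node port (30653 in this run) on every node.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: NodePort
  selector:
    name: externalname-service
  ports:
  - port: 80
    targetPort: 80
```

Connectivity is then verified from an exec pod with `nc -zv -t -w 2 <target> <port>` against the service name, the cluster IP, and each node IP with the allocated node port, as the kubectl exec transcripts above show.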
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.252 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":195,"skipped":3395,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:15.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 20 00:39:15.452: INFO: Waiting up to 5m0s for pod "pod-00ccf516-2272-4ec4-b118-5dc110f8a123" in namespace "emptydir-587" to be "Succeeded or Failed" Apr 20 00:39:15.458: INFO: Pod "pod-00ccf516-2272-4ec4-b118-5dc110f8a123": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.74235ms Apr 20 00:39:17.463: INFO: Pod "pod-00ccf516-2272-4ec4-b118-5dc110f8a123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011028207s Apr 20 00:39:19.466: INFO: Pod "pod-00ccf516-2272-4ec4-b118-5dc110f8a123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014698317s STEP: Saw pod success Apr 20 00:39:19.466: INFO: Pod "pod-00ccf516-2272-4ec4-b118-5dc110f8a123" satisfied condition "Succeeded or Failed" Apr 20 00:39:19.468: INFO: Trying to get logs from node latest-worker2 pod pod-00ccf516-2272-4ec4-b118-5dc110f8a123 container test-container: STEP: delete the pod Apr 20 00:39:19.502: INFO: Waiting for pod pod-00ccf516-2272-4ec4-b118-5dc110f8a123 to disappear Apr 20 00:39:19.510: INFO: Pod pod-00ccf516-2272-4ec4-b118-5dc110f8a123 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:39:19.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-587" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3420,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:19.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:39:30.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9872" for this suite. • [SLOW TEST:11.299 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":197,"skipped":3443,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:30.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:39:30.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 20 00:39:31.017: INFO: stderr: "" Apr 20 00:39:31.017: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:39:31.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1526" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":198,"skipped":3469,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:31.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:39:31.102: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 20 00:39:31.107: INFO: Number of nodes with available pods: 0 Apr 20 00:39:31.107: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
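(For context: the node-selector mechanic this step exercises pairs a DaemonSet `spec.template.spec.nodeSelector` with a node label, so pods appear only on matching nodes. A rough sketch of the shape being tested — the label key, image, and most fields are assumptions, since the log does not print the spec:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name taken from the log
spec:
  selector:
    matchLabels:
      app: daemon-set              # assumed label key
  updateStrategy:
    type: RollingUpdate            # strategy the test later switches to
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                # pods schedule only onto nodes labeled color=blue
      containers:
      - name: app
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # assumed image
```

Relabeling the node from blue to green, as the following steps do, makes the daemon pod unschedulable there until the DaemonSet's own selector is updated to match.)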
Apr 20 00:39:31.206: INFO: Number of nodes with available pods: 0 Apr 20 00:39:31.206: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:32.210: INFO: Number of nodes with available pods: 0 Apr 20 00:39:32.210: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:33.211: INFO: Number of nodes with available pods: 0 Apr 20 00:39:33.211: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:34.210: INFO: Number of nodes with available pods: 1 Apr 20 00:39:34.210: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 20 00:39:34.274: INFO: Number of nodes with available pods: 1 Apr 20 00:39:34.274: INFO: Number of running nodes: 0, number of available pods: 1 Apr 20 00:39:35.298: INFO: Number of nodes with available pods: 0 Apr 20 00:39:35.298: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 20 00:39:35.313: INFO: Number of nodes with available pods: 0 Apr 20 00:39:35.313: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:36.316: INFO: Number of nodes with available pods: 0 Apr 20 00:39:36.316: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:37.318: INFO: Number of nodes with available pods: 0 Apr 20 00:39:37.318: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:38.317: INFO: Number of nodes with available pods: 0 Apr 20 00:39:38.317: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:39.317: INFO: Number of nodes with available pods: 0 Apr 20 00:39:39.317: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:40.317: INFO: Number of nodes with available pods: 0 Apr 20 00:39:40.317: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:41.317: INFO: Number of nodes with 
available pods: 0 Apr 20 00:39:41.317: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:42.318: INFO: Number of nodes with available pods: 0 Apr 20 00:39:42.318: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:43.317: INFO: Number of nodes with available pods: 0 Apr 20 00:39:43.317: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:44.316: INFO: Number of nodes with available pods: 0 Apr 20 00:39:44.316: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:45.317: INFO: Number of nodes with available pods: 0 Apr 20 00:39:45.317: INFO: Node latest-worker is running more than one daemon pod Apr 20 00:39:46.317: INFO: Number of nodes with available pods: 1 Apr 20 00:39:46.317: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4276, will wait for the garbage collector to delete the pods Apr 20 00:39:46.384: INFO: Deleting DaemonSet.extensions daemon-set took: 6.555999ms Apr 20 00:39:46.684: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.240617ms Apr 20 00:39:52.787: INFO: Number of nodes with available pods: 0 Apr 20 00:39:52.787: INFO: Number of running nodes: 0, number of available pods: 0 Apr 20 00:39:52.824: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4276/daemonsets","resourceVersion":"9469914"},"items":null} Apr 20 00:39:52.827: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4276/pods","resourceVersion":"9469914"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 
Apr 20 00:39:52.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4276" for this suite. • [SLOW TEST:21.839 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":199,"skipped":3472,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:52.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-3db110e8-8bce-4327-92f8-b48520b20474 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:39:52.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5806" for this suite. 
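(For context: the ConfigMap failure case above is validation-only — the API server rejects any `data` key that does not match `[-._a-zA-Z0-9]+`, and an empty key fails that check before anything is persisted. A minimal manifest that reproduces the rejection; the name is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key        # illustrative name
data:
  "": "value"                      # empty key: kubectl apply / create is rejected by the API server
```

)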
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":200,"skipped":3497,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:52.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:39:53.094: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 20 00:39:58.098: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 20 00:39:58.098: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 20 00:39:58.127: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6633 /apis/apps/v1/namespaces/deployment-6633/deployments/test-cleanup-deployment af3ea5d0-ad4d-4c85-a815-e1932f4ead21 9469969 1 2020-04-20 00:39:58 +0000 UTC map[name:cleanup-pod] map[] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029371e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 20 00:39:58.246: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-6633 /apis/apps/v1/namespaces/deployment-6633/replicasets/test-cleanup-deployment-577c77b589 0c77f229-6a09-47af-acd7-c8e113c84449 9469978 1 2020-04-20 00:39:58 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment af3ea5d0-ad4d-4c85-a815-e1932f4ead21 0xc002937657 0xc002937658}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029376c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:39:58.246: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 20 00:39:58.246: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6633 /apis/apps/v1/namespaces/deployment-6633/replicasets/test-cleanup-controller 722a2a7f-ecac-4edd-a5fa-fb3ab463f180 9469971 1 2020-04-20 00:39:53 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment af3ea5d0-ad4d-4c85-a815-e1932f4ead21 0xc002937587 0xc002937588}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0029375e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:39:58.255: INFO: Pod "test-cleanup-controller-4bqmv" is available: &Pod{ObjectMeta:{test-cleanup-controller-4bqmv test-cleanup-controller- deployment-6633 /api/v1/namespaces/deployment-6633/pods/test-cleanup-controller-4bqmv 4b1f4ef0-d76a-40a8-b71c-398674f978a0 9469947 0 2020-04-20 00:39:53 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 722a2a7f-ecac-4edd-a5fa-fb3ab463f180 0xc002937b87 0xc002937b88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c25pt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c25pt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c25pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not
-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:39:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:39:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:39:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:39:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.148,StartTime:2020-04-20 00:39:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:39:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3258851e488fe2a4a31746ead7d756c14d55932a539df014ac1adb8362d993d5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.148,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 
20 00:39:58.255: INFO: Pod "test-cleanup-deployment-577c77b589-w2wx4" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-w2wx4 test-cleanup-deployment-577c77b589- deployment-6633 /api/v1/namespaces/deployment-6633/pods/test-cleanup-deployment-577c77b589-w2wx4 4bc0ad16-ea7a-4a8a-86f1-9c6f9411d6ae 9469979 0 2020-04-20 00:39:58 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 0c77f229-6a09-47af-acd7-c8e113c84449 0xc002937d17 0xc002937d18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c25pt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c25pt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c25pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessageP
olicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:39:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:39:58.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6633" for this suite. 
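(For context: the "delete old replica sets" behavior verified above is driven by `spec.revisionHistoryLimit`; the dumped Deployment has `RevisionHistoryLimit:*0`, so superseded ReplicaSets are garbage-collected as soon as the rollout completes. A sketch of the relevant spec, with the name, labels, and image taken from the Deployment dump in the log and the remaining fields left at defaults:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0          # keep no old ReplicaSets after a rollover
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```

)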
• [SLOW TEST:5.314 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":201,"skipped":3522,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:39:58.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 20 00:39:58.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9449' Apr 20 00:39:58.611: INFO: stderr: "" Apr 20 00:39:58.611: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 20 00:39:59.675: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:39:59.676: INFO: Found 0 / 1 Apr 20 00:40:00.616: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:40:00.616: INFO: Found 0 / 1 Apr 20 00:40:01.615: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:40:01.615: INFO: Found 0 / 1 Apr 20 00:40:02.616: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:40:02.616: INFO: Found 1 / 1 Apr 20 00:40:02.616: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 20 00:40:02.619: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:40:02.619: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 20 00:40:02.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-jxrnh --namespace=kubectl-9449 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 20 00:40:02.730: INFO: stderr: "" Apr 20 00:40:02.730: INFO: stdout: "pod/agnhost-master-jxrnh patched\n" STEP: checking annotations Apr 20 00:40:02.741: INFO: Selector matched 1 pods for map[app:agnhost] Apr 20 00:40:02.741: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:40:02.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9449" for this suite. 
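(For context: the `-p` payload in the `kubectl patch` invocation above is a strategic merge patch; expressed as YAML it is simply:

```yaml
metadata:
  annotations:
    x: "y"                         # merged key-wise; existing annotations are preserved
```

Because annotations merge rather than replace, this adds `x: y` without disturbing any annotations the pod already carries — which is what the subsequent "checking annotations" step verifies.)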
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":202,"skipped":3535,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:40:02.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:40:02.852: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:40:04.866: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:40:06.856: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = false) Apr 20 00:40:08.855: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = false) Apr 20 00:40:10.855: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready 
= false) Apr 20 00:40:12.856: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = false) Apr 20 00:40:14.856: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = false) Apr 20 00:40:16.856: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = false) Apr 20 00:40:18.856: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = false) Apr 20 00:40:20.856: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = false) Apr 20 00:40:22.855: INFO: The status of Pod test-webserver-9a388b97-f502-414f-9224-54ae20d6fb68 is Running (Ready = true) Apr 20 00:40:22.858: INFO: Container started at 2020-04-20 00:40:05 +0000 UTC, pod became ready at 2020-04-20 00:40:22 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:40:22.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5002" for this suite. 
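(For context: the `Ready = false` window above — container started at 00:40:05, pod Ready at 00:40:22 — is the readiness probe's initial delay plus the time to the first successful probe. The exact probe values are not shown in the log; a sketch assuming an HTTP probe on port 80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver             # name pattern from the log
spec:
  containers:
  - name: test-webserver
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # assumed image
    readinessProbe:
      httpGet:
        path: /                    # assumed path
        port: 80
      initialDelaySeconds: 20      # assumed; pod stays Ready=false until the first success
      periodSeconds: 5             # assumed
```

The pod reaches `Running` as soon as the container starts, but stays out of Service endpoints until the probe first succeeds — exactly the gap the test measures.)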
• [SLOW TEST:20.118 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3547,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:40:22.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-42ad78b1-aef2-447b-bcc3-24c66295fc30
STEP: Creating a pod to test consume secrets
Apr 20 00:40:22.966: INFO: Waiting up to 5m0s for pod "pod-secrets-9da39751-4148-49cc-b972-8d3091061097" in namespace "secrets-4748" to be "Succeeded or Failed"
Apr 20 00:40:22.975: INFO: Pod "pod-secrets-9da39751-4148-49cc-b972-8d3091061097": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147495ms
Apr 20 00:40:24.979: INFO: Pod "pod-secrets-9da39751-4148-49cc-b972-8d3091061097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012330379s
Apr 20 00:40:26.983: INFO: Pod "pod-secrets-9da39751-4148-49cc-b972-8d3091061097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016626421s
STEP: Saw pod success
Apr 20 00:40:26.983: INFO: Pod "pod-secrets-9da39751-4148-49cc-b972-8d3091061097" satisfied condition "Succeeded or Failed"
Apr 20 00:40:26.986: INFO: Trying to get logs from node latest-worker pod pod-secrets-9da39751-4148-49cc-b972-8d3091061097 container secret-volume-test:
STEP: delete the pod
Apr 20 00:40:27.060: INFO: Waiting for pod pod-secrets-9da39751-4148-49cc-b972-8d3091061097 to disappear
Apr 20 00:40:27.067: INFO: Pod pod-secrets-9da39751-4148-49cc-b972-8d3091061097 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:40:27.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4748" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3552,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:40:27.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 20 00:40:28.029: INFO: Pod name wrapped-volume-race-ff1110b9-d076-4492-a2ab-b8ac80a84eba: Found 0 pods out of 5
Apr 20 00:40:33.038: INFO: Pod name wrapped-volume-race-ff1110b9-d076-4492-a2ab-b8ac80a84eba: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ff1110b9-d076-4492-a2ab-b8ac80a84eba in namespace emptydir-wrapper-2248, will wait for the garbage collector to delete the pods
Apr 20 00:40:47.296: INFO: Deleting ReplicationController wrapped-volume-race-ff1110b9-d076-4492-a2ab-b8ac80a84eba took: 31.885727ms
Apr 20 00:40:47.596: INFO: Terminating ReplicationController wrapped-volume-race-ff1110b9-d076-4492-a2ab-b8ac80a84eba pods took: 300.266776ms
STEP: Creating RC which spawns configmap-volume pods
Apr 20 00:41:03.130: INFO: Pod name wrapped-volume-race-f5ce67a9-faac-4ea8-b150-be26e2683a86: Found 0 pods out of 5
Apr 20 00:41:08.138: INFO: Pod name wrapped-volume-race-f5ce67a9-faac-4ea8-b150-be26e2683a86: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f5ce67a9-faac-4ea8-b150-be26e2683a86 in namespace emptydir-wrapper-2248, will wait for the garbage collector to delete the pods
Apr 20 00:41:22.222: INFO: Deleting ReplicationController wrapped-volume-race-f5ce67a9-faac-4ea8-b150-be26e2683a86 took: 10.586436ms
Apr 20 00:41:22.623: INFO: Terminating ReplicationController wrapped-volume-race-f5ce67a9-faac-4ea8-b150-be26e2683a86 pods took: 400.299505ms
STEP: Creating RC which spawns configmap-volume pods
Apr 20 00:41:34.048: INFO: Pod name wrapped-volume-race-369ab9af-9053-4846-a9c9-d0f28d93b301: Found 0 pods out of 5
Apr 20 00:41:39.056: INFO: Pod name wrapped-volume-race-369ab9af-9053-4846-a9c9-d0f28d93b301: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-369ab9af-9053-4846-a9c9-d0f28d93b301 in namespace emptydir-wrapper-2248, will wait for the garbage collector to delete the pods
Apr 20 00:41:53.140: INFO: Deleting ReplicationController wrapped-volume-race-369ab9af-9053-4846-a9c9-d0f28d93b301 took: 7.95401ms
Apr 20 00:41:53.541: INFO: Terminating ReplicationController wrapped-volume-race-369ab9af-9053-4846-a9c9-d0f28d93b301 pods took: 400.288768ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:04.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2248" for this suite.
• [SLOW TEST:97.465 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":205,"skipped":3562,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:04.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 20 00:42:04.586: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 20 00:42:04.608: INFO: Waiting for terminating namespaces to be deleted...
Apr 20 00:42:04.611: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 20 00:42:04.625: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:42:04.625: INFO: Container kindnet-cni ready: true, restart count 0
Apr 20 00:42:04.625: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:42:04.625: INFO: Container kube-proxy ready: true, restart count 0
Apr 20 00:42:04.625: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 20 00:42:04.664: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:42:04.664: INFO: Container kindnet-cni ready: true, restart count 0
Apr 20 00:42:04.664: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 20 00:42:04.664: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8cbdd1ed-eefa-4d14-96bc-53044c2df886 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8cbdd1ed-eefa-4d14-96bc-53044c2df886 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8cbdd1ed-eefa-4d14-96bc-53044c2df886
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:12.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6795" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:8.278 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":206,"skipped":3574,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:12.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 20 00:42:19.439: INFO: Successfully updated pod "adopt-release-ttsb7"
STEP: Checking that the Job readopts the Pod
Apr 20 00:42:19.439: INFO: Waiting up to 15m0s for pod "adopt-release-ttsb7" in namespace "job-4497" to be "adopted"
Apr 20 00:42:19.460: INFO: Pod "adopt-release-ttsb7": Phase="Running", Reason="", readiness=true. Elapsed: 21.459567ms
Apr 20 00:42:21.465: INFO: Pod "adopt-release-ttsb7": Phase="Running", Reason="", readiness=true. Elapsed: 2.025875697s
Apr 20 00:42:21.465: INFO: Pod "adopt-release-ttsb7" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 20 00:42:21.973: INFO: Successfully updated pod "adopt-release-ttsb7"
STEP: Checking that the Job releases the Pod
Apr 20 00:42:21.973: INFO: Waiting up to 15m0s for pod "adopt-release-ttsb7" in namespace "job-4497" to be "released"
Apr 20 00:42:22.006: INFO: Pod "adopt-release-ttsb7": Phase="Running", Reason="", readiness=true. Elapsed: 32.983081ms
Apr 20 00:42:24.010: INFO: Pod "adopt-release-ttsb7": Phase="Running", Reason="", readiness=true. Elapsed: 2.036788944s
Apr 20 00:42:24.010: INFO: Pod "adopt-release-ttsb7" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:24.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4497" for this suite.
• [SLOW TEST:11.219 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":207,"skipped":3612,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:24.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Apr 20 00:42:24.083: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:24.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1426" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":208,"skipped":3666,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:24.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:31.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9862" for this suite.
• [SLOW TEST:7.107 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":209,"skipped":3685,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:31.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:35.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5257" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3686,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:35.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 20 00:42:35.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:39.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1030" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3695,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:39.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-a888dce6-f44c-467b-8429-01c5ee13a953
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:43.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3426" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3725,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:43.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0420 00:42:53.885381 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 20 00:42:53.885: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:42:53.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1604" for this suite.
• [SLOW TEST:10.108 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":213,"skipped":3739,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:42:53.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 20 00:43:01.998: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 20 00:43:02.020: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 20 00:43:04.021: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 20 00:43:04.025: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 20 00:43:06.021: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 20 00:43:06.025: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:43:06.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5312" for this suite.
• [SLOW TEST:12.140 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3760,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:43:06.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 20 00:43:10.146: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:43:10.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7137" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3767,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:43:10.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0420 00:43:50.802609 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 20 00:43:50.802: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:43:50.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3418" for this suite.
• [SLOW TEST:40.644 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":216,"skipped":3771,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:43:50.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 20 00:43:51.186: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Apr 20 00:43:53.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 00:43:55.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940231, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 20 00:43:59.094: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:11.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2604" for this suite.
STEP: Destroying namespace "webhook-2604-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:20.741 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":217,"skipped":3774,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:11.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:11.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2673" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":218,"skipped":3776,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:11.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 20 00:44:16.866: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:16.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9782" for this suite.
• [SLOW TEST:5.230 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":219,"skipped":3781,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:16.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 20 00:44:17.049: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:22.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3667" for this suite.
• [SLOW TEST:6.006 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":220,"skipped":3834,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:22.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 20 00:44:24.143: INFO: Waiting up to 5m0s for pod "pod-bf963744-5067-4b38-aded-ee4699c44ac0" in namespace "emptydir-1818" to be "Succeeded or Failed"
Apr 20 00:44:24.191: INFO: Pod "pod-bf963744-5067-4b38-aded-ee4699c44ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 47.931887ms
Apr 20 00:44:26.243: INFO: Pod "pod-bf963744-5067-4b38-aded-ee4699c44ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099610418s
Apr 20 00:44:28.298: INFO: Pod "pod-bf963744-5067-4b38-aded-ee4699c44ac0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154785515s
STEP: Saw pod success
Apr 20 00:44:28.298: INFO: Pod "pod-bf963744-5067-4b38-aded-ee4699c44ac0" satisfied condition "Succeeded or Failed"
Apr 20 00:44:28.347: INFO: Trying to get logs from node latest-worker pod pod-bf963744-5067-4b38-aded-ee4699c44ac0 container test-container:
STEP: delete the pod
Apr 20 00:44:28.648: INFO: Waiting for pod pod-bf963744-5067-4b38-aded-ee4699c44ac0 to disappear
Apr 20 00:44:28.658: INFO: Pod pod-bf963744-5067-4b38-aded-ee4699c44ac0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:28.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1818" for this suite.
• [SLOW TEST:5.671 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3881,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:28.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 20 00:44:28.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a" in namespace "projected-79" to be "Succeeded or Failed"
Apr 20 00:44:28.775: INFO: Pod "downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a": Phase="Pending", Reason="", readiness=false. Elapsed: 43.853369ms
Apr 20 00:44:30.779: INFO: Pod "downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047907313s
Apr 20 00:44:32.784: INFO: Pod "downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052324705s
STEP: Saw pod success
Apr 20 00:44:32.784: INFO: Pod "downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a" satisfied condition "Succeeded or Failed"
Apr 20 00:44:32.787: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a container client-container:
STEP: delete the pod
Apr 20 00:44:32.849: INFO: Waiting for pod downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a to disappear
Apr 20 00:44:32.856: INFO: Pod downwardapi-volume-40c4d4ec-b5b9-42eb-ad08-4de1c8ad689a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:32.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-79" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3892,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:32.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 20 00:44:33.498: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 20 00:44:35.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940273, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940273, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940273, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940273, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 20 00:44:38.527: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:39.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9851" for this suite.
STEP: Destroying namespace "webhook-9851-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.271 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":223,"skipped":3923,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:39.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 20 00:44:39.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf" in namespace "downward-api-6723" to be "Succeeded or Failed"
Apr 20 00:44:39.191: INFO: Pod "downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.539073ms
Apr 20 00:44:41.202: INFO: Pod "downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014269436s
Apr 20 00:44:43.207: INFO: Pod "downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019232319s
STEP: Saw pod success
Apr 20 00:44:43.207: INFO: Pod "downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf" satisfied condition "Succeeded or Failed"
Apr 20 00:44:43.210: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf container client-container:
STEP: delete the pod
Apr 20 00:44:43.291: INFO: Waiting for pod downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf to disappear
Apr 20 00:44:43.299: INFO: Pod downwardapi-volume-002f074b-3972-4e4c-a17b-4e4405a42faf no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:43.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6723" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3925,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:43.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:44:58.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3457" for this suite.
STEP: Destroying namespace "nsdeletetest-2292" for this suite.
Apr 20 00:44:58.556: INFO: Namespace nsdeletetest-2292 was already deleted
STEP: Destroying namespace "nsdeletetest-8355" for this suite.
• [SLOW TEST:15.254 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":225,"skipped":3937,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:44:58.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 20 00:45:04.677: INFO: Waiting up to 5m0s for pod "client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d" in namespace "pods-4207" to be "Succeeded or Failed"
Apr 20 00:45:04.683: INFO: Pod "client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.770685ms
Apr 20 00:45:06.687: INFO: Pod "client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009672445s
Apr 20 00:45:08.691: INFO: Pod "client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01408584s
STEP: Saw pod success
Apr 20 00:45:08.691: INFO: Pod "client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d" satisfied condition "Succeeded or Failed"
Apr 20 00:45:08.694: INFO: Trying to get logs from node latest-worker pod client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d container env3cont:
STEP: delete the pod
Apr 20 00:45:08.715: INFO: Waiting for pod client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d to disappear
Apr 20 00:45:08.725: INFO: Pod client-envvars-f83ddb1d-cc20-49ca-85ba-d51cbfaba10d no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:45:08.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4207" for this suite.
• [SLOW TEST:10.174 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3938,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:45:08.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:45:08.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7580" for this suite.
STEP: Destroying namespace "nspatchtest-4b792119-91b0-433e-a000-61629749f589-3488" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":227,"skipped":3951,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:45:08.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0420 00:45:21.466564 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 20 00:45:21.466: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:45:21.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-969" for this suite.
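For context, the dependency this garbage-collector spec sets up ("set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well") is expressed through `metadata.ownerReferences` on each pod. A minimal sketch of such a doubly-owned pod's metadata — the pod name, image, and UIDs below are invented for illustration, not taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-xxxxx        # illustrative generated name
  ownerReferences:
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-be-deleted
      uid: 11111111-1111-1111-1111-111111111111  # invented UID
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-stay
      uid: 22222222-2222-2222-2222-222222222222  # invented UID
spec:
  containers:
    - name: nginx
      image: nginx                               # assumed image
```

Because the second owner (`simpletest-rc-to-stay`) still exists after `simpletest-rc-to-be-deleted` is deleted, the garbage collector must leave these pods alone — which is exactly what the spec verifies.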
• [SLOW TEST:12.506 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":228,"skipped":3958,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:45:21.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-7b95 STEP: Creating a pod to test atomic-volume-subpath Apr 20 00:45:21.663: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7b95" in namespace "subpath-8799" to be "Succeeded or 
Failed" Apr 20 00:45:21.691: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Pending", Reason="", readiness=false. Elapsed: 28.604479ms Apr 20 00:45:23.696: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032953519s Apr 20 00:45:25.700: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 4.03746663s Apr 20 00:45:27.705: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 6.041724766s Apr 20 00:45:29.708: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 8.04528631s Apr 20 00:45:31.712: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 10.049123892s Apr 20 00:45:33.715: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 12.052428728s Apr 20 00:45:35.719: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 14.056037214s Apr 20 00:45:37.723: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 16.060022896s Apr 20 00:45:39.727: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 18.064164957s Apr 20 00:45:41.731: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 20.068299328s Apr 20 00:45:43.735: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Running", Reason="", readiness=true. Elapsed: 22.072295734s Apr 20 00:45:45.740: INFO: Pod "pod-subpath-test-configmap-7b95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.076772398s STEP: Saw pod success Apr 20 00:45:45.740: INFO: Pod "pod-subpath-test-configmap-7b95" satisfied condition "Succeeded or Failed" Apr 20 00:45:45.742: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-7b95 container test-container-subpath-configmap-7b95: STEP: delete the pod Apr 20 00:45:45.818: INFO: Waiting for pod pod-subpath-test-configmap-7b95 to disappear Apr 20 00:45:45.822: INFO: Pod pod-subpath-test-configmap-7b95 no longer exists STEP: Deleting pod pod-subpath-test-configmap-7b95 Apr 20 00:45:45.822: INFO: Deleting pod "pod-subpath-test-configmap-7b95" in namespace "subpath-8799" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:45:45.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8799" for this suite. • [SLOW TEST:24.359 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":229,"skipped":3988,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:45:45.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:45:45.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b" in namespace "downward-api-6597" to be "Succeeded or Failed" Apr 20 00:45:45.912: INFO: Pod "downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575238ms Apr 20 00:45:47.916: INFO: Pod "downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010733363s Apr 20 00:45:49.921: INFO: Pod "downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015525106s STEP: Saw pod success Apr 20 00:45:49.921: INFO: Pod "downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b" satisfied condition "Succeeded or Failed" Apr 20 00:45:49.924: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b container client-container: STEP: delete the pod Apr 20 00:45:49.958: INFO: Waiting for pod downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b to disappear Apr 20 00:45:49.972: INFO: Pod downwardapi-volume-f268e3fc-565f-4d31-94d2-91e710f3030b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:45:49.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6597" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3989,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:45:49.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
Creating configMap with name configmap-test-volume-612dd2ed-7abc-4abb-80a0-56e231de3380 STEP: Creating a pod to test consume configMaps Apr 20 00:45:50.070: INFO: Waiting up to 5m0s for pod "pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0" in namespace "configmap-5486" to be "Succeeded or Failed" Apr 20 00:45:50.087: INFO: Pod "pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.500032ms Apr 20 00:45:52.165: INFO: Pod "pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094935531s Apr 20 00:45:54.169: INFO: Pod "pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098893816s STEP: Saw pod success Apr 20 00:45:54.169: INFO: Pod "pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0" satisfied condition "Succeeded or Failed" Apr 20 00:45:54.172: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0 container configmap-volume-test: STEP: delete the pod Apr 20 00:45:54.188: INFO: Waiting for pod pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0 to disappear Apr 20 00:45:54.193: INFO: Pod pod-configmaps-a41ab513-7253-471f-8eb1-bbfb8af53ee0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:45:54.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5486" for this suite. 
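A hand-written approximation of the pod this ConfigMap spec creates — a configMap volume consumed by a container running as a non-root user — might look like the following; the image, UID, mount path, and key name are assumptions for illustration, not values from the run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                     # non-root UID (assumed value)
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume     # illustrative; matches the test's naming pattern
  containers:
    - name: configmap-volume-test
      image: busybox                    # assumed image
      command: ["cat", "/etc/configmap-volume/data-1"]   # assumed key
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
```

The pod succeeding (as seen in the "Succeeded or Failed" wait above) shows the mounted ConfigMap data is readable by the non-root UID.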
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":4008,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:45:54.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 20 00:46:02.315: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 20 00:46:02.325: INFO: Pod pod-with-poststart-http-hook still exists Apr 20 00:46:04.325: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 20 00:46:04.329: INFO: Pod pod-with-poststart-http-hook still exists Apr 20 00:46:06.325: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 20 00:46:06.329: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:46:06.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3762" for this suite. 
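The "pod with lifecycle hook" in this spec pairs a `postStart` `httpGet` hook with the handler pod created in the BeforeEach step above. A sketch of the hooked container — the image, path, host IP, and port are placeholders, not the values used in this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
    - name: pod-with-poststart-http-hook
      image: k8s.gcr.io/pause:3.2       # assumed image
      lifecycle:
        postStart:
          httpGet:
            path: /echo                 # placeholder path on the handler
            host: 10.244.0.10           # placeholder: the hook-handler pod's IP
            port: 8080                  # placeholder port
```

The spec then checks that the handler actually received the request ("check poststart hook") before deleting the pod.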
• [SLOW TEST:12.136 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4043,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:46:06.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 20 00:46:06.400: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 20 00:46:06.406: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 20 00:46:06.406: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 20 00:46:06.419: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 20 00:46:06.419: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 20 00:46:06.478: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 20 00:46:06.478: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 20 00:46:13.787: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:46:13.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-5632" for this suite. • [SLOW TEST:7.484 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":233,"skipped":4072,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:46:13.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:46:14.030: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 20 00:46:16.062: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:46:16.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3722" for this suite. 
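The quota "condition-test" that this ReplicationController spec creates ("allows only two pods to run in the current namespace") can be sketched as a pod-count ResourceQuota:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
```

An RC that then asks for more replicas than the quota permits surfaces a failure condition in its status (the "desired failure condition" checked above), which clears once the RC is scaled down to fit within the quota.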
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":234,"skipped":4081,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:46:16.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0420 00:46:17.319023 8 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 20 00:46:17.319: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:46:17.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8358" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":235,"skipped":4085,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:46:17.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: 
Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:46:50.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4198" for this suite. • [SLOW TEST:32.961 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4087,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:46:50.287: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 20 00:46:50.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5954' Apr 20 00:46:52.970: INFO: stderr: "" Apr 20 00:46:52.970: INFO: stdout: "pod/pause created\n" Apr 20 00:46:52.970: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 20 00:46:52.970: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5954" to be "running and ready" Apr 20 00:46:52.992: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.818762ms Apr 20 00:46:55.016: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045833606s Apr 20 00:46:57.021: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.050082718s Apr 20 00:46:57.021: INFO: Pod "pause" satisfied condition "running and ready" Apr 20 00:46:57.021: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 20 00:46:57.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5954' Apr 20 00:46:57.129: INFO: stderr: "" Apr 20 00:46:57.129: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 20 00:46:57.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5954' Apr 20 00:46:57.220: INFO: stderr: "" Apr 20 00:46:57.220: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 20 00:46:57.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5954' Apr 20 00:46:57.313: INFO: stderr: "" Apr 20 00:46:57.313: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 20 00:46:57.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5954' Apr 20 00:46:57.417: INFO: stderr: "" Apr 20 00:46:57.417: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 20 00:46:57.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete 
--grace-period=0 --force -f - --namespace=kubectl-5954' Apr 20 00:46:57.600: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 20 00:46:57.600: INFO: stdout: "pod \"pause\" force deleted\n" Apr 20 00:46:57.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5954' Apr 20 00:46:57.774: INFO: stderr: "No resources found in kubectl-5954 namespace.\n" Apr 20 00:46:57.774: INFO: stdout: "" Apr 20 00:46:57.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5954 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 20 00:46:57.859: INFO: stderr: "" Apr 20 00:46:57.859: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:46:57.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5954" for this suite. 
• [SLOW TEST:7.617 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":237,"skipped":4088,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:46:57.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:47:02.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1321" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4111,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:02.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:47:02.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e" in namespace "downward-api-2087" to be "Succeeded or Failed" Apr 20 00:47:02.209: INFO: Pod "downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.017412ms Apr 20 00:47:04.213: INFO: Pod "downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025844496s Apr 20 00:47:06.218: INFO: Pod "downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030855836s STEP: Saw pod success Apr 20 00:47:06.218: INFO: Pod "downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e" satisfied condition "Succeeded or Failed" Apr 20 00:47:06.221: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e container client-container: STEP: delete the pod Apr 20 00:47:06.293: INFO: Waiting for pod downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e to disappear Apr 20 00:47:06.308: INFO: Pod downwardapi-volume-61e6eff0-66e8-4bf8-8672-faa46825999e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:47:06.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2087" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4114,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:06.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
Creating replication controller my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0 Apr 20 00:47:06.418: INFO: Pod name my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0: Found 0 pods out of 1 Apr 20 00:47:11.421: INFO: Pod name my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0: Found 1 pods out of 1 Apr 20 00:47:11.421: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0" are running Apr 20 00:47:11.423: INFO: Pod "my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0-vr4xj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:06 +0000 UTC Reason: Message:}]) Apr 20 00:47:11.423: INFO: Trying to dial the pod Apr 20 00:47:16.435: INFO: Controller my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0: Got expected result from replica 1 [my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0-vr4xj]: "my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0-vr4xj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:47:16.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9647" for this suite. 
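The spec above logs only the controller name, not its manifest; a minimal sketch of an equivalent ReplicationController, assuming the usual serve-hostname pattern the e2e suite relies on (the name, image, and args here are illustrative, not taken from the log):

```shell
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # illustrative; the test appends a generated UUID
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image choice
        args: ["serve-hostname"]                               # replies with the pod name
EOF
# Each replica should answer with its own pod name when dialed,
# matching the "Got expected result from replica 1" line above
kubectl get pods -l name=my-hostname-basic
```

The "Trying to dial the pod" step in the log corresponds to hitting each replica over the proxy and comparing the response to the pod's own name.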
• [SLOW TEST:10.128 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":240,"skipped":4114,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:16.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 20 00:47:16.505: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 20 00:47:16.529: INFO: Waiting for terminating namespaces to be deleted... 
Apr 20 00:47:16.531: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 20 00:47:16.553: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 20 00:47:16.553: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 00:47:16.553: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 20 00:47:16.553: INFO: Container kube-proxy ready: true, restart count 0 Apr 20 00:47:16.553: INFO: my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0-vr4xj from replication-controller-9647 started at 2020-04-20 00:47:06 +0000 UTC (1 container statuses recorded) Apr 20 00:47:16.553: INFO: Container my-hostname-basic-15e2bf83-dfac-4f72-88bb-77c328c33de0 ready: true, restart count 0 Apr 20 00:47:16.553: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 20 00:47:16.559: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 20 00:47:16.559: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 00:47:16.559: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 20 00:47:16.559: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160760a09a77d2b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160760a09b35e25d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
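The two FailedScheduling events above come from a pod whose nodeSelector matches no node label; a minimal pod spec that provokes the same `0/3 nodes are available: 3 node(s) didn't match node selector` event might look like this (the selector key/value and pod image are assumptions, not from the log):

```shell
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e-test: restricted      # assumed key/value; no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
# The scheduler records Warning/FailedScheduling events against the pod
kubectl describe pod restricted-pod | grep -A1 FailedScheduling
```

The pod stays Pending indefinitely; the test only asserts that the scheduler emits the FailedScheduling events before the namespace is torn down.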
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:47:17.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2377" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":241,"skipped":4152,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:17.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:47:17.674: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 
00:47:18.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2932" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":242,"skipped":4194,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:18.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 20 00:47:18.901: INFO: Waiting up to 5m0s for pod "pod-1f563a4e-275f-44eb-b982-e2886a4c803e" in namespace "emptydir-1483" to be "Succeeded or Failed" Apr 20 00:47:18.913: INFO: Pod "pod-1f563a4e-275f-44eb-b982-e2886a4c803e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.147394ms Apr 20 00:47:21.634: INFO: Pod "pod-1f563a4e-275f-44eb-b982-e2886a4c803e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733503838s Apr 20 00:47:23.639: INFO: Pod "pod-1f563a4e-275f-44eb-b982-e2886a4c803e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.737977271s Apr 20 00:47:25.643: INFO: Pod "pod-1f563a4e-275f-44eb-b982-e2886a4c803e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.742524673s STEP: Saw pod success Apr 20 00:47:25.643: INFO: Pod "pod-1f563a4e-275f-44eb-b982-e2886a4c803e" satisfied condition "Succeeded or Failed" Apr 20 00:47:25.647: INFO: Trying to get logs from node latest-worker pod pod-1f563a4e-275f-44eb-b982-e2886a4c803e container test-container: STEP: delete the pod Apr 20 00:47:25.669: INFO: Waiting for pod pod-1f563a4e-275f-44eb-b982-e2886a4c803e to disappear Apr 20 00:47:25.674: INFO: Pod pod-1f563a4e-275f-44eb-b982-e2886a4c803e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:47:25.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1483" for this suite. • [SLOW TEST:6.828 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4197,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:25.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:47:25.772: INFO: Creating ReplicaSet my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf Apr 20 00:47:25.800: INFO: Pod name my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf: Found 0 pods out of 1 Apr 20 00:47:30.804: INFO: Pod name my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf: Found 1 pods out of 1 Apr 20 00:47:30.804: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf" is running Apr 20 00:47:30.807: INFO: Pod "my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf-lshq8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-20 00:47:25 +0000 UTC Reason: Message:}]) Apr 20 00:47:30.807: INFO: Trying to dial the pod Apr 20 00:47:35.826: INFO: Controller my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf: Got expected result from replica 1 [my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf-lshq8]: "my-hostname-basic-57e76d42-a073-4078-9902-a5f27cdffccf-lshq8", 1 of 1 required successes so far [AfterEach] 
[sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:47:35.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3202" for this suite. • [SLOW TEST:10.153 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":244,"skipped":4260,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:35.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:47:52.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6008" for this suite. • [SLOW TEST:17.109 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":245,"skipped":4267,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:47:52.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 20 00:47:52.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5313' Apr 20 00:47:53.280: INFO: stderr: "" Apr 20 00:47:53.280: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 20 00:47:53.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313' Apr 20 00:47:53.392: INFO: stderr: "" Apr 20 00:47:53.392: INFO: stdout: "update-demo-nautilus-6gtrm update-demo-nautilus-bqv9j " Apr 20 00:47:53.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313' Apr 20 00:47:53.485: INFO: stderr: "" Apr 20 00:47:53.485: INFO: stdout: "" Apr 20 00:47:53.485: INFO: update-demo-nautilus-6gtrm is created but not running Apr 20 00:47:58.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313' Apr 20 00:47:58.595: INFO: stderr: "" Apr 20 00:47:58.595: INFO: stdout: "update-demo-nautilus-6gtrm update-demo-nautilus-bqv9j " Apr 20 00:47:58.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313' Apr 20 00:47:58.723: INFO: stderr: "" Apr 20 00:47:58.724: INFO: stdout: "true" Apr 20 00:47:58.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5313' Apr 20 00:47:58.823: INFO: stderr: "" Apr 20 00:47:58.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 20 00:47:58.823: INFO: validating pod update-demo-nautilus-6gtrm Apr 20 00:47:58.839: INFO: got data: { "image": "nautilus.jpg" } Apr 20 00:47:58.839: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 20 00:47:58.839: INFO: update-demo-nautilus-6gtrm is verified up and running Apr 20 00:47:58.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqv9j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313' Apr 20 00:47:58.932: INFO: stderr: "" Apr 20 00:47:58.933: INFO: stdout: "true" Apr 20 00:47:58.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqv9j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5313' Apr 20 00:47:59.027: INFO: stderr: "" Apr 20 00:47:59.027: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 20 00:47:59.027: INFO: validating pod update-demo-nautilus-bqv9j Apr 20 00:47:59.031: INFO: got data: { "image": "nautilus.jpg" } Apr 20 00:47:59.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 20 00:47:59.031: INFO: update-demo-nautilus-bqv9j is verified up and running STEP: scaling down the replication controller Apr 20 00:47:59.032: INFO: scanned /root for discovery docs: Apr 20 00:47:59.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5313' Apr 20 00:48:00.148: INFO: stderr: "" Apr 20 00:48:00.148: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 20 00:48:00.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313' Apr 20 00:48:00.249: INFO: stderr: "" Apr 20 00:48:00.249: INFO: stdout: "update-demo-nautilus-6gtrm update-demo-nautilus-bqv9j " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 20 00:48:05.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313' Apr 20 00:48:05.359: INFO: stderr: "" Apr 20 00:48:05.359: INFO: stdout: "update-demo-nautilus-6gtrm update-demo-nautilus-bqv9j " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 20 00:48:10.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313' Apr 20 00:48:10.454: INFO: stderr: "" Apr 20 00:48:10.454: INFO: stdout: "update-demo-nautilus-6gtrm update-demo-nautilus-bqv9j " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 20 00:48:15.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313'
Apr 20 00:48:15.543: INFO: stderr: ""
Apr 20 00:48:15.543: INFO: stdout: "update-demo-nautilus-6gtrm "
Apr 20 00:48:15.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:15.635: INFO: stderr: ""
Apr 20 00:48:15.635: INFO: stdout: "true"
Apr 20 00:48:15.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:15.725: INFO: stderr: ""
Apr 20 00:48:15.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 00:48:15.725: INFO: validating pod update-demo-nautilus-6gtrm
Apr 20 00:48:15.728: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 20 00:48:15.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 00:48:15.729: INFO: update-demo-nautilus-6gtrm is verified up and running
STEP: scaling up the replication controller
Apr 20 00:48:15.731: INFO: scanned /root for discovery docs:
Apr 20 00:48:15.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5313'
Apr 20 00:48:16.892: INFO: stderr: ""
Apr 20 00:48:16.892: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 20 00:48:16.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313'
Apr 20 00:48:16.996: INFO: stderr: ""
Apr 20 00:48:16.996: INFO: stdout: "update-demo-nautilus-6gtrm update-demo-nautilus-d7l7f "
Apr 20 00:48:16.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:17.112: INFO: stderr: ""
Apr 20 00:48:17.112: INFO: stdout: "true"
Apr 20 00:48:17.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:17.264: INFO: stderr: ""
Apr 20 00:48:17.264: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 00:48:17.264: INFO: validating pod update-demo-nautilus-6gtrm
Apr 20 00:48:17.268: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 20 00:48:17.268: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 00:48:17.268: INFO: update-demo-nautilus-6gtrm is verified up and running
Apr 20 00:48:17.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7l7f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:17.359: INFO: stderr: ""
Apr 20 00:48:17.360: INFO: stdout: ""
Apr 20 00:48:17.360: INFO: update-demo-nautilus-d7l7f is created but not running
Apr 20 00:48:22.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5313'
Apr 20 00:48:22.459: INFO: stderr: ""
Apr 20 00:48:22.459: INFO: stdout: "update-demo-nautilus-6gtrm update-demo-nautilus-d7l7f "
Apr 20 00:48:22.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:22.552: INFO: stderr: ""
Apr 20 00:48:22.552: INFO: stdout: "true"
Apr 20 00:48:22.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gtrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:22.655: INFO: stderr: ""
Apr 20 00:48:22.655: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 00:48:22.655: INFO: validating pod update-demo-nautilus-6gtrm
Apr 20 00:48:22.658: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 20 00:48:22.658: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 00:48:22.658: INFO: update-demo-nautilus-6gtrm is verified up and running
Apr 20 00:48:22.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7l7f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:22.798: INFO: stderr: ""
Apr 20 00:48:22.798: INFO: stdout: "true"
Apr 20 00:48:22.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d7l7f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5313'
Apr 20 00:48:22.903: INFO: stderr: ""
Apr 20 00:48:22.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 00:48:22.903: INFO: validating pod update-demo-nautilus-d7l7f
Apr 20 00:48:22.907: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 20 00:48:22.907: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 00:48:22.907: INFO: update-demo-nautilus-d7l7f is verified up and running
STEP: using delete to clean up resources
Apr 20 00:48:22.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5313'
Apr 20 00:48:23.017: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 00:48:23.017: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 20 00:48:23.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5313'
Apr 20 00:48:23.121: INFO: stderr: "No resources found in kubectl-5313 namespace.\n"
Apr 20 00:48:23.121: INFO: stdout: ""
Apr 20 00:48:23.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5313 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 20 00:48:23.245: INFO: stderr: ""
Apr 20 00:48:23.245: INFO: stdout: "update-demo-nautilus-6gtrm\nupdate-demo-nautilus-d7l7f\n"
Apr 20 00:48:23.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5313'
Apr 20 00:48:23.840: INFO: stderr: "No resources found in kubectl-5313 namespace.\n"
Apr 20 00:48:23.840: INFO: stdout: ""
Apr 20 00:48:23.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5313 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 20 00:48:23.932: INFO: stderr: ""
Apr 20 00:48:23.932: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:48:23.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5313" for this suite.
• [SLOW TEST:30.999 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":246,"skipped":4268,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:48:23.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 20 00:48:24.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c" in namespace "projected-859" to be "Succeeded or Failed"
Apr 20 00:48:24.040: INFO: Pod "downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112258ms
Apr 20 00:48:26.045: INFO: Pod "downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007617746s
Apr 20 00:48:28.049: INFO: Pod "downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012084149s
STEP: Saw pod success
Apr 20 00:48:28.049: INFO: Pod "downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c" satisfied condition "Succeeded or Failed"
Apr 20 00:48:28.052: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c container client-container:
STEP: delete the pod
Apr 20 00:48:28.077: INFO: Waiting for pod downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c to disappear
Apr 20 00:48:28.082: INFO: Pod downwardapi-volume-1403ca2c-4655-41fd-bb54-3b9902af5c9c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:48:28.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-859" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4277,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:48:28.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 20 00:48:28.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8250
I0420 00:48:28.178126 8 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8250, replica count: 1
I0420 00:48:29.228599 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0420 00:48:30.228828 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0420 00:48:31.229088 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 20 00:48:31.357: INFO: Created: latency-svc-4sjj7
Apr 20 00:48:31.390: INFO: Got endpoints: latency-svc-4sjj7 [61.014169ms]
Apr 20 00:48:31.423: INFO: Created: latency-svc-7mbpw
Apr 20 00:48:31.454: INFO: Got endpoints: latency-svc-7mbpw [63.638177ms]
Apr 20 00:48:31.483: INFO: Created: latency-svc-9mmjz
Apr 20 00:48:31.557: INFO: Got endpoints: latency-svc-9mmjz [167.109728ms]
Apr 20 00:48:31.585: INFO: Created: latency-svc-ntf6k
Apr 20 00:48:31.600: INFO: Got endpoints: latency-svc-ntf6k [210.391206ms]
Apr 20 00:48:31.621: INFO: Created: latency-svc-k6cgd
Apr 20 00:48:31.636: INFO: Got endpoints: latency-svc-k6cgd [245.780923ms]
Apr 20 00:48:31.657: INFO: Created: latency-svc-cwk7r
Apr 20 00:48:31.713: INFO: Got endpoints: latency-svc-cwk7r [323.109066ms]
Apr 20 00:48:31.731: INFO: Created: latency-svc-jgcxp
Apr 20 00:48:31.756: INFO: Got endpoints: latency-svc-jgcxp [366.067869ms]
Apr 20 00:48:31.783: INFO: Created: latency-svc-nqxpp
Apr 20 00:48:31.802: INFO: Got endpoints: latency-svc-nqxpp [411.718662ms]
Apr 20 00:48:31.846: INFO: Created: latency-svc-gjl9k
Apr 20 00:48:31.861: INFO: Got endpoints: latency-svc-gjl9k [471.061643ms]
Apr 20 00:48:31.878: INFO: Created: latency-svc-9c4d2
Apr 20 00:48:31.892: INFO: Got endpoints: latency-svc-9c4d2 [501.605935ms]
Apr 20 00:48:31.908: INFO: Created: latency-svc-bkvqh
Apr 20 00:48:31.921: INFO: Got endpoints: latency-svc-bkvqh [531.340737ms]
Apr 20 00:48:31.976: INFO: Created: latency-svc-fm2bl
Apr 20 00:48:31.993: INFO: Created: latency-svc-6rvjk
Apr 20 00:48:31.993: INFO: Got endpoints: latency-svc-fm2bl [603.082288ms]
Apr 20 00:48:32.035: INFO: Got endpoints: latency-svc-6rvjk [644.449657ms]
Apr 20 00:48:32.065: INFO: Created: latency-svc-xfnfp
Apr 20 00:48:32.096: INFO: Got endpoints: latency-svc-xfnfp [705.359348ms]
Apr 20 00:48:32.106: INFO: Created: latency-svc-6j6fw
Apr 20 00:48:32.119: INFO: Got endpoints: latency-svc-6j6fw [728.936215ms]
Apr 20 00:48:32.149: INFO: Created: latency-svc-q9cwb
Apr 20 00:48:32.163: INFO: Got endpoints: latency-svc-q9cwb [772.89419ms]
Apr 20 00:48:32.240: INFO: Created: latency-svc-6rtxl
Apr 20 00:48:32.253: INFO: Got endpoints: latency-svc-6rtxl [799.316937ms]
Apr 20 00:48:32.311: INFO: Created: latency-svc-99rsg
Apr 20 00:48:32.325: INFO: Got endpoints: latency-svc-99rsg [768.081839ms]
Apr 20 00:48:32.376: INFO: Created: latency-svc-m5gj5
Apr 20 00:48:32.415: INFO: Got endpoints: latency-svc-m5gj5 [814.701405ms]
Apr 20 00:48:32.436: INFO: Created: latency-svc-sncqp
Apr 20 00:48:32.451: INFO: Got endpoints: latency-svc-sncqp [814.664431ms]
Apr 20 00:48:32.491: INFO: Created: latency-svc-zs9cz
Apr 20 00:48:32.515: INFO: Got endpoints: latency-svc-zs9cz [801.663839ms]
Apr 20 00:48:32.516: INFO: Created: latency-svc-wkw56
Apr 20 00:48:32.529: INFO: Got endpoints: latency-svc-wkw56 [772.920238ms]
Apr 20 00:48:32.551: INFO: Created: latency-svc-22kbb
Apr 20 00:48:32.568: INFO: Got endpoints: latency-svc-22kbb [766.398395ms]
Apr 20 00:48:32.587: INFO: Created: latency-svc-hjfgq
Apr 20 00:48:32.641: INFO: Got endpoints: latency-svc-hjfgq [779.860183ms]
Apr 20 00:48:32.658: INFO: Created: latency-svc-n8xrr
Apr 20 00:48:32.671: INFO: Got endpoints: latency-svc-n8xrr [778.471906ms]
Apr 20 00:48:32.688: INFO: Created: latency-svc-j6gzm
Apr 20 00:48:32.700: INFO: Got endpoints: latency-svc-j6gzm [778.757346ms]
Apr 20 00:48:32.730: INFO: Created: latency-svc-zlcc5
Apr 20 00:48:32.773: INFO: Got endpoints: latency-svc-zlcc5 [779.53729ms]
Apr 20 00:48:32.804: INFO: Created: latency-svc-7jdql
Apr 20 00:48:32.820: INFO: Got endpoints: latency-svc-7jdql [785.246231ms]
Apr 20 00:48:32.862: INFO: Created: latency-svc-dxtf9
Apr 20 00:48:32.922: INFO: Got endpoints: latency-svc-dxtf9 [826.375681ms]
Apr 20 00:48:32.925: INFO: Created: latency-svc-tb2br
Apr 20 00:48:32.952: INFO: Got endpoints: latency-svc-tb2br [832.783558ms]
Apr 20 00:48:32.976: INFO: Created: latency-svc-qd2fg
Apr 20 00:48:33.002: INFO: Got endpoints: latency-svc-qd2fg [839.028329ms]
Apr 20 00:48:33.048: INFO: Created: latency-svc-4d72p
Apr 20 00:48:33.067: INFO: Created: latency-svc-wbng6
Apr 20 00:48:33.067: INFO: Got endpoints: latency-svc-4d72p [813.473758ms]
Apr 20 00:48:33.080: INFO: Got endpoints: latency-svc-wbng6 [754.349405ms]
Apr 20 00:48:33.102: INFO: Created: latency-svc-z7hmh
Apr 20 00:48:33.116: INFO: Got endpoints: latency-svc-z7hmh [700.5758ms]
Apr 20 00:48:33.138: INFO: Created: latency-svc-s2crm
Apr 20 00:48:33.168: INFO: Got endpoints: latency-svc-s2crm [717.107996ms]
Apr 20 00:48:33.216: INFO: Created: latency-svc-c656d
Apr 20 00:48:33.227: INFO: Got endpoints: latency-svc-c656d [712.296817ms]
Apr 20 00:48:33.264: INFO: Created: latency-svc-k52kj
Apr 20 00:48:33.296: INFO: Got endpoints: latency-svc-k52kj [767.009927ms]
Apr 20 00:48:33.306: INFO: Created: latency-svc-lbz52
Apr 20 00:48:33.323: INFO: Got endpoints: latency-svc-lbz52 [754.819592ms]
Apr 20 00:48:33.351: INFO: Created: latency-svc-k24rg
Apr 20 00:48:33.378: INFO: Got endpoints: latency-svc-k24rg [736.700744ms]
Apr 20 00:48:33.420: INFO: Created: latency-svc-rzc9x
Apr 20 00:48:33.431: INFO: Got endpoints: latency-svc-rzc9x [760.633462ms]
Apr 20 00:48:33.451: INFO: Created: latency-svc-wnjnv
Apr 20 00:48:33.462: INFO: Got
endpoints: latency-svc-wnjnv [761.29863ms]
Apr 20 00:48:33.486: INFO: Created: latency-svc-mtdzk
Apr 20 00:48:33.503: INFO: Got endpoints: latency-svc-mtdzk [730.354932ms]
Apr 20 00:48:33.551: INFO: Created: latency-svc-sctrs
Apr 20 00:48:33.559: INFO: Got endpoints: latency-svc-sctrs [738.600671ms]
Apr 20 00:48:33.583: INFO: Created: latency-svc-gtx6f
Apr 20 00:48:33.595: INFO: Got endpoints: latency-svc-gtx6f [672.327519ms]
Apr 20 00:48:33.638: INFO: Created: latency-svc-t6l67
Apr 20 00:48:33.689: INFO: Got endpoints: latency-svc-t6l67 [736.207162ms]
Apr 20 00:48:33.714: INFO: Created: latency-svc-xkgqp
Apr 20 00:48:33.727: INFO: Got endpoints: latency-svc-xkgqp [724.793025ms]
Apr 20 00:48:33.780: INFO: Created: latency-svc-n8vdj
Apr 20 00:48:33.809: INFO: Got endpoints: latency-svc-n8vdj [742.311391ms]
Apr 20 00:48:33.822: INFO: Created: latency-svc-kfpx4
Apr 20 00:48:33.835: INFO: Got endpoints: latency-svc-kfpx4 [755.5222ms]
Apr 20 00:48:33.858: INFO: Created: latency-svc-cljxz
Apr 20 00:48:33.871: INFO: Got endpoints: latency-svc-cljxz [754.591675ms]
Apr 20 00:48:33.888: INFO: Created: latency-svc-dvmd9
Apr 20 00:48:33.905: INFO: Got endpoints: latency-svc-dvmd9 [736.540827ms]
Apr 20 00:48:33.943: INFO: Created: latency-svc-87q2g
Apr 20 00:48:33.959: INFO: Got endpoints: latency-svc-87q2g [731.287033ms]
Apr 20 00:48:33.984: INFO: Created: latency-svc-m8j7k
Apr 20 00:48:34.001: INFO: Got endpoints: latency-svc-m8j7k [704.461519ms]
Apr 20 00:48:34.048: INFO: Created: latency-svc-7xw4x
Apr 20 00:48:34.054: INFO: Got endpoints: latency-svc-7xw4x [731.166049ms]
Apr 20 00:48:34.098: INFO: Created: latency-svc-zmnk8
Apr 20 00:48:34.109: INFO: Got endpoints: latency-svc-zmnk8 [730.426167ms]
Apr 20 00:48:34.122: INFO: Created: latency-svc-chpck
Apr 20 00:48:34.133: INFO: Got endpoints: latency-svc-chpck [701.666323ms]
Apr 20 00:48:34.187: INFO: Created: latency-svc-ftm44
Apr 20 00:48:34.219: INFO: Got endpoints: latency-svc-ftm44 [756.821035ms]
Apr 20 00:48:34.220: INFO: Created: latency-svc-kmz4n
Apr 20 00:48:34.242: INFO: Got endpoints: latency-svc-kmz4n [738.636441ms]
Apr 20 00:48:34.272: INFO: Created: latency-svc-7h29c
Apr 20 00:48:34.284: INFO: Got endpoints: latency-svc-7h29c [725.301074ms]
Apr 20 00:48:34.342: INFO: Created: latency-svc-glkpk
Apr 20 00:48:34.368: INFO: Got endpoints: latency-svc-glkpk [772.755782ms]
Apr 20 00:48:34.368: INFO: Created: latency-svc-vq8wm
Apr 20 00:48:34.380: INFO: Got endpoints: latency-svc-vq8wm [691.495785ms]
Apr 20 00:48:34.486: INFO: Created: latency-svc-knv2l
Apr 20 00:48:34.506: INFO: Got endpoints: latency-svc-knv2l [778.450507ms]
Apr 20 00:48:34.507: INFO: Created: latency-svc-tdwcf
Apr 20 00:48:34.518: INFO: Got endpoints: latency-svc-tdwcf [708.882131ms]
Apr 20 00:48:34.537: INFO: Created: latency-svc-f579x
Apr 20 00:48:34.548: INFO: Got endpoints: latency-svc-f579x [712.690498ms]
Apr 20 00:48:34.571: INFO: Created: latency-svc-77lxv
Apr 20 00:48:34.617: INFO: Got endpoints: latency-svc-77lxv [746.730464ms]
Apr 20 00:48:34.632: INFO: Created: latency-svc-4w9zx
Apr 20 00:48:34.641: INFO: Got endpoints: latency-svc-4w9zx [736.489137ms]
Apr 20 00:48:34.656: INFO: Created: latency-svc-9gzww
Apr 20 00:48:34.743: INFO: Got endpoints: latency-svc-9gzww [784.045472ms]
Apr 20 00:48:34.758: INFO: Created: latency-svc-5m6qs
Apr 20 00:48:34.773: INFO: Got endpoints: latency-svc-5m6qs [772.302779ms]
Apr 20 00:48:34.795: INFO: Created: latency-svc-qhfms
Apr 20 00:48:34.810: INFO: Got endpoints: latency-svc-qhfms [755.397883ms]
Apr 20 00:48:34.823: INFO: Created: latency-svc-78959
Apr 20 00:48:34.834: INFO: Got endpoints: latency-svc-78959 [725.327304ms]
Apr 20 00:48:34.886: INFO: Created: latency-svc-wbzld
Apr 20 00:48:34.893: INFO: Got endpoints: latency-svc-wbzld [759.855789ms]
Apr 20 00:48:34.920: INFO: Created: latency-svc-n44r4
Apr 20 00:48:34.938: INFO: Got endpoints: latency-svc-n44r4 [719.275615ms]
Apr 20 00:48:34.962: INFO: Created: latency-svc-c9zhh
Apr 20 00:48:34.979: INFO: Got endpoints: latency-svc-c9zhh [736.797233ms]
Apr 20 00:48:35.024: INFO: Created: latency-svc-5wll2
Apr 20 00:48:35.039: INFO: Got endpoints: latency-svc-5wll2 [754.635165ms]
Apr 20 00:48:35.075: INFO: Created: latency-svc-rkfdn
Apr 20 00:48:35.099: INFO: Got endpoints: latency-svc-rkfdn [731.821256ms]
Apr 20 00:48:35.123: INFO: Created: latency-svc-t7h5x
Apr 20 00:48:35.155: INFO: Got endpoints: latency-svc-t7h5x [775.232267ms]
Apr 20 00:48:35.184: INFO: Created: latency-svc-2c7vk
Apr 20 00:48:35.195: INFO: Got endpoints: latency-svc-2c7vk [689.114715ms]
Apr 20 00:48:35.226: INFO: Created: latency-svc-8vmrd
Apr 20 00:48:35.237: INFO: Got endpoints: latency-svc-8vmrd [718.609608ms]
Apr 20 00:48:35.309: INFO: Created: latency-svc-gnrfr
Apr 20 00:48:35.318: INFO: Got endpoints: latency-svc-gnrfr [770.227518ms]
Apr 20 00:48:35.339: INFO: Created: latency-svc-nx498
Apr 20 00:48:35.355: INFO: Got endpoints: latency-svc-nx498 [737.31647ms]
Apr 20 00:48:35.381: INFO: Created: latency-svc-s2ff2
Apr 20 00:48:35.413: INFO: Got endpoints: latency-svc-s2ff2 [771.554331ms]
Apr 20 00:48:35.440: INFO: Created: latency-svc-tbqrz
Apr 20 00:48:35.466: INFO: Got endpoints: latency-svc-tbqrz [722.793898ms]
Apr 20 00:48:35.540: INFO: Created: latency-svc-krdxp
Apr 20 00:48:35.567: INFO: Got endpoints: latency-svc-krdxp [793.7304ms]
Apr 20 00:48:35.567: INFO: Created: latency-svc-8hn8l
Apr 20 00:48:35.582: INFO: Got endpoints: latency-svc-8hn8l [772.046768ms]
Apr 20 00:48:35.616: INFO: Created: latency-svc-znmmd
Apr 20 00:48:35.630: INFO: Got endpoints: latency-svc-znmmd [795.877481ms]
Apr 20 00:48:35.670: INFO: Created: latency-svc-m5jjc
Apr 20 00:48:35.674: INFO: Got endpoints: latency-svc-m5jjc [780.876931ms]
Apr 20 00:48:35.688: INFO: Created: latency-svc-n4xbh
Apr 20 00:48:35.698: INFO: Got endpoints: latency-svc-n4xbh [760.561154ms]
Apr 20 00:48:35.711: INFO: Created: latency-svc-4b9qm
Apr 20 00:48:35.722: INFO: Got endpoints: latency-svc-4b9qm [743.130716ms]
Apr 20 00:48:35.741:
INFO: Created: latency-svc-k9n54
Apr 20 00:48:35.762: INFO: Got endpoints: latency-svc-k9n54 [723.26556ms]
Apr 20 00:48:35.809: INFO: Created: latency-svc-pdp7t
Apr 20 00:48:35.831: INFO: Got endpoints: latency-svc-pdp7t [731.336973ms]
Apr 20 00:48:35.831: INFO: Created: latency-svc-q8ppl
Apr 20 00:48:35.861: INFO: Got endpoints: latency-svc-q8ppl [705.814039ms]
Apr 20 00:48:35.899: INFO: Created: latency-svc-ps66d
Apr 20 00:48:35.934: INFO: Got endpoints: latency-svc-ps66d [739.110398ms]
Apr 20 00:48:35.958: INFO: Created: latency-svc-gstp4
Apr 20 00:48:35.971: INFO: Got endpoints: latency-svc-gstp4 [734.159551ms]
Apr 20 00:48:36.000: INFO: Created: latency-svc-ltckg
Apr 20 00:48:36.013: INFO: Got endpoints: latency-svc-ltckg [695.183335ms]
Apr 20 00:48:36.060: INFO: Created: latency-svc-n4qzf
Apr 20 00:48:36.083: INFO: Created: latency-svc-zhprb
Apr 20 00:48:36.084: INFO: Got endpoints: latency-svc-n4qzf [729.014265ms]
Apr 20 00:48:36.107: INFO: Got endpoints: latency-svc-zhprb [693.697309ms]
Apr 20 00:48:36.131: INFO: Created: latency-svc-4zw54
Apr 20 00:48:36.145: INFO: Got endpoints: latency-svc-4zw54 [679.437754ms]
Apr 20 00:48:36.192: INFO: Created: latency-svc-kt4xf
Apr 20 00:48:36.227: INFO: Got endpoints: latency-svc-kt4xf [660.154076ms]
Apr 20 00:48:36.228: INFO: Created: latency-svc-njqk4
Apr 20 00:48:36.270: INFO: Got endpoints: latency-svc-njqk4 [687.459107ms]
Apr 20 00:48:36.336: INFO: Created: latency-svc-svbf5
Apr 20 00:48:36.365: INFO: Created: latency-svc-zp46m
Apr 20 00:48:36.365: INFO: Got endpoints: latency-svc-svbf5 [734.867767ms]
Apr 20 00:48:36.387: INFO: Got endpoints: latency-svc-zp46m [713.246205ms]
Apr 20 00:48:36.419: INFO: Created: latency-svc-lfr6f
Apr 20 00:48:36.473: INFO: Got endpoints: latency-svc-lfr6f [774.669899ms]
Apr 20 00:48:36.491: INFO: Created: latency-svc-pfj9c
Apr 20 00:48:36.507: INFO: Got endpoints: latency-svc-pfj9c [784.858898ms]
Apr 20 00:48:36.521: INFO: Created: latency-svc-gzxkz
Apr 20 00:48:36.531: INFO: Got endpoints: latency-svc-gzxkz [768.353382ms]
Apr 20 00:48:36.545: INFO: Created: latency-svc-j75hn
Apr 20 00:48:36.563: INFO: Got endpoints: latency-svc-j75hn [732.344636ms]
Apr 20 00:48:36.611: INFO: Created: latency-svc-vkrxn
Apr 20 00:48:36.635: INFO: Got endpoints: latency-svc-vkrxn [773.443779ms]
Apr 20 00:48:36.636: INFO: Created: latency-svc-gwt8d
Apr 20 00:48:36.648: INFO: Got endpoints: latency-svc-gwt8d [714.060276ms]
Apr 20 00:48:36.664: INFO: Created: latency-svc-cw62n
Apr 20 00:48:36.689: INFO: Got endpoints: latency-svc-cw62n [718.443161ms]
Apr 20 00:48:36.737: INFO: Created: latency-svc-6rsnb
Apr 20 00:48:36.762: INFO: Created: latency-svc-8tcsx
Apr 20 00:48:36.762: INFO: Got endpoints: latency-svc-6rsnb [748.445281ms]
Apr 20 00:48:36.775: INFO: Got endpoints: latency-svc-8tcsx [690.825133ms]
Apr 20 00:48:36.796: INFO: Created: latency-svc-9l5ct
Apr 20 00:48:36.833: INFO: Got endpoints: latency-svc-9l5ct [726.273146ms]
Apr 20 00:48:36.875: INFO: Created: latency-svc-wzxvt
Apr 20 00:48:36.888: INFO: Got endpoints: latency-svc-wzxvt [743.136312ms]
Apr 20 00:48:36.911: INFO: Created: latency-svc-smxc2
Apr 20 00:48:36.926: INFO: Got endpoints: latency-svc-smxc2 [699.20486ms]
Apr 20 00:48:36.959: INFO: Created: latency-svc-8j2th
Apr 20 00:48:36.994: INFO: Got endpoints: latency-svc-8j2th [724.422744ms]
Apr 20 00:48:37.000: INFO: Created: latency-svc-g8kpx
Apr 20 00:48:37.016: INFO: Got endpoints: latency-svc-g8kpx [650.984787ms]
Apr 20 00:48:37.043: INFO: Created: latency-svc-grjsk
Apr 20 00:48:37.079: INFO: Got endpoints: latency-svc-grjsk [691.865371ms]
Apr 20 00:48:37.128: INFO: Created: latency-svc-fgr9x
Apr 20 00:48:37.146: INFO: Got endpoints: latency-svc-fgr9x [672.744133ms]
Apr 20 00:48:37.157: INFO: Created: latency-svc-6vds9
Apr 20 00:48:37.166: INFO: Got endpoints: latency-svc-6vds9 [659.037481ms]
Apr 20 00:48:37.181: INFO: Created: latency-svc-v9pb7
Apr 20 00:48:37.190: INFO: Got endpoints: latency-svc-v9pb7 [659.497314ms]
Apr 20 00:48:37.258: INFO: Created: latency-svc-729zj
Apr 20 00:48:37.276: INFO: Got endpoints: latency-svc-729zj [713.151396ms]
Apr 20 00:48:37.277: INFO: Created: latency-svc-hlndg
Apr 20 00:48:37.290: INFO: Got endpoints: latency-svc-hlndg [655.340105ms]
Apr 20 00:48:37.306: INFO: Created: latency-svc-cxrfn
Apr 20 00:48:37.319: INFO: Got endpoints: latency-svc-cxrfn [670.86706ms]
Apr 20 00:48:37.337: INFO: Created: latency-svc-kzfbj
Apr 20 00:48:37.355: INFO: Got endpoints: latency-svc-kzfbj [665.762503ms]
Apr 20 00:48:37.401: INFO: Created: latency-svc-sqkf6
Apr 20 00:48:37.415: INFO: Created: latency-svc-h5dwq
Apr 20 00:48:37.415: INFO: Got endpoints: latency-svc-sqkf6 [652.787716ms]
Apr 20 00:48:37.427: INFO: Got endpoints: latency-svc-h5dwq [652.611008ms]
Apr 20 00:48:37.445: INFO: Created: latency-svc-4ds8f
Apr 20 00:48:37.458: INFO: Got endpoints: latency-svc-4ds8f [624.609902ms]
Apr 20 00:48:37.481: INFO: Created: latency-svc-jtphq
Apr 20 00:48:37.545: INFO: Got endpoints: latency-svc-jtphq [656.700318ms]
Apr 20 00:48:37.547: INFO: Created: latency-svc-mz5qh
Apr 20 00:48:37.555: INFO: Got endpoints: latency-svc-mz5qh [628.494442ms]
Apr 20 00:48:37.594: INFO: Created: latency-svc-9zx26
Apr 20 00:48:37.615: INFO: Got endpoints: latency-svc-9zx26 [621.219077ms]
Apr 20 00:48:37.678: INFO: Created: latency-svc-rdbwb
Apr 20 00:48:37.681: INFO: Got endpoints: latency-svc-rdbwb [665.117257ms]
Apr 20 00:48:37.697: INFO: Created: latency-svc-5mgrs
Apr 20 00:48:37.705: INFO: Got endpoints: latency-svc-5mgrs [626.07192ms]
Apr 20 00:48:37.733: INFO: Created: latency-svc-8g2gg
Apr 20 00:48:37.753: INFO: Got endpoints: latency-svc-8g2gg [607.068574ms]
Apr 20 00:48:37.814: INFO: Created: latency-svc-h984m
Apr 20 00:48:37.819: INFO: Got endpoints: latency-svc-h984m [652.858795ms]
Apr 20 00:48:37.846: INFO: Created: latency-svc-2t2fq
Apr 20 00:48:37.858: INFO: Got endpoints: latency-svc-2t2fq [668.065929ms]
Apr 20 00:48:37.888: INFO: Created: latency-svc-jmccl
Apr 20 00:48:37.946: INFO: Got
endpoints: latency-svc-jmccl [669.635856ms]
Apr 20 00:48:37.955: INFO: Created: latency-svc-lbf4d
Apr 20 00:48:37.972: INFO: Got endpoints: latency-svc-lbf4d [682.257621ms]
Apr 20 00:48:37.996: INFO: Created: latency-svc-n6ssd
Apr 20 00:48:38.020: INFO: Got endpoints: latency-svc-n6ssd [700.750951ms]
Apr 20 00:48:38.044: INFO: Created: latency-svc-lwcms
Apr 20 00:48:38.066: INFO: Got endpoints: latency-svc-lwcms [710.302183ms]
Apr 20 00:48:38.080: INFO: Created: latency-svc-5btg2
Apr 20 00:48:38.092: INFO: Got endpoints: latency-svc-5btg2 [677.497963ms]
Apr 20 00:48:38.117: INFO: Created: latency-svc-whbk8
Apr 20 00:48:38.130: INFO: Got endpoints: latency-svc-whbk8 [702.961684ms]
Apr 20 00:48:38.146: INFO: Created: latency-svc-67jhr
Apr 20 00:48:38.222: INFO: Got endpoints: latency-svc-67jhr [764.169706ms]
Apr 20 00:48:38.237: INFO: Created: latency-svc-89kqw
Apr 20 00:48:38.250: INFO: Got endpoints: latency-svc-89kqw [704.839114ms]
Apr 20 00:48:38.272: INFO: Created: latency-svc-tt8f7
Apr 20 00:48:38.286: INFO: Got endpoints: latency-svc-tt8f7 [731.329719ms]
Apr 20 00:48:38.308: INFO: Created: latency-svc-6lsnm
Apr 20 00:48:38.354: INFO: Got endpoints: latency-svc-6lsnm [738.47044ms]
Apr 20 00:48:38.380: INFO: Created: latency-svc-7rmd5
Apr 20 00:48:38.394: INFO: Got endpoints: latency-svc-7rmd5 [713.11506ms]
Apr 20 00:48:38.428: INFO: Created: latency-svc-mg8fq
Apr 20 00:48:38.442: INFO: Got endpoints: latency-svc-mg8fq [737.311453ms]
Apr 20 00:48:38.491: INFO: Created: latency-svc-bcrxv
Apr 20 00:48:38.494: INFO: Got endpoints: latency-svc-bcrxv [740.522025ms]
Apr 20 00:48:38.512: INFO: Created: latency-svc-n6x4s
Apr 20 00:48:38.530: INFO: Got endpoints: latency-svc-n6x4s [710.66799ms]
Apr 20 00:48:38.560: INFO: Created: latency-svc-k8h6l
Apr 20 00:48:38.578: INFO: Got endpoints: latency-svc-k8h6l [719.449372ms]
Apr 20 00:48:38.617: INFO: Created: latency-svc-qr4rj
Apr 20 00:48:38.643: INFO: Created: latency-svc-fdlvx
Apr 20 00:48:38.643: INFO: Got endpoints: latency-svc-qr4rj [696.810538ms]
Apr 20 00:48:38.650: INFO: Got endpoints: latency-svc-fdlvx [677.294972ms]
Apr 20 00:48:38.668: INFO: Created: latency-svc-h45xp
Apr 20 00:48:38.692: INFO: Got endpoints: latency-svc-h45xp [672.420664ms]
Apr 20 00:48:38.748: INFO: Created: latency-svc-lr9vh
Apr 20 00:48:38.777: INFO: Created: latency-svc-p9x6j
Apr 20 00:48:38.777: INFO: Got endpoints: latency-svc-lr9vh [711.101615ms]
Apr 20 00:48:38.796: INFO: Got endpoints: latency-svc-p9x6j [703.094333ms]
Apr 20 00:48:38.837: INFO: Created: latency-svc-njqmx
Apr 20 00:48:38.910: INFO: Got endpoints: latency-svc-njqmx [779.560292ms]
Apr 20 00:48:38.932: INFO: Created: latency-svc-7ds9l
Apr 20 00:48:38.945: INFO: Got endpoints: latency-svc-7ds9l [723.250429ms]
Apr 20 00:48:38.962: INFO: Created: latency-svc-wz84r
Apr 20 00:48:38.975: INFO: Got endpoints: latency-svc-wz84r [725.173107ms]
Apr 20 00:48:39.004: INFO: Created: latency-svc-9pxwp
Apr 20 00:48:39.036: INFO: Got endpoints: latency-svc-9pxwp [749.70849ms]
Apr 20 00:48:39.052: INFO: Created: latency-svc-kpwgd
Apr 20 00:48:39.065: INFO: Got endpoints: latency-svc-kpwgd [711.334016ms]
Apr 20 00:48:39.088: INFO: Created: latency-svc-hkbfg
Apr 20 00:48:39.101: INFO: Got endpoints: latency-svc-hkbfg [706.783603ms]
Apr 20 00:48:39.124: INFO: Created: latency-svc-vrv87
Apr 20 00:48:39.135: INFO: Got endpoints: latency-svc-vrv87 [692.2863ms]
Apr 20 00:48:39.167: INFO: Created: latency-svc-dpqhn
Apr 20 00:48:39.190: INFO: Created: latency-svc-2ff9k
Apr 20 00:48:39.190: INFO: Got endpoints: latency-svc-dpqhn [696.556704ms]
Apr 20 00:48:39.207: INFO: Got endpoints: latency-svc-2ff9k [677.749528ms]
Apr 20 00:48:39.232: INFO: Created: latency-svc-gz9v4
Apr 20 00:48:39.249: INFO: Got endpoints: latency-svc-gz9v4 [670.916884ms]
Apr 20 00:48:39.318: INFO: Created: latency-svc-w2gr8
Apr 20 00:48:39.333: INFO: Got endpoints: latency-svc-w2gr8 [690.458714ms]
Apr 20 00:48:39.334: INFO: Created: latency-svc-vq9jc
Apr 20 00:48:39.351: INFO: Got endpoints: latency-svc-vq9jc [700.985632ms]
Apr 20 00:48:39.363: INFO: Created: latency-svc-g66g7
Apr 20 00:48:39.381: INFO: Got endpoints: latency-svc-g66g7 [688.961049ms]
Apr 20 00:48:39.406: INFO: Created: latency-svc-fj8dx
Apr 20 00:48:39.465: INFO: Got endpoints: latency-svc-fj8dx [688.348631ms]
Apr 20 00:48:39.467: INFO: Created: latency-svc-pbwxq
Apr 20 00:48:39.472: INFO: Got endpoints: latency-svc-pbwxq [676.824822ms]
Apr 20 00:48:39.496: INFO: Created: latency-svc-42jkn
Apr 20 00:48:39.520: INFO: Got endpoints: latency-svc-42jkn [610.16183ms]
Apr 20 00:48:39.543: INFO: Created: latency-svc-tlqr9
Apr 20 00:48:39.556: INFO: Got endpoints: latency-svc-tlqr9 [611.092858ms]
Apr 20 00:48:39.606: INFO: Created: latency-svc-nr7gd
Apr 20 00:48:39.627: INFO: Created: latency-svc-m65kl
Apr 20 00:48:39.628: INFO: Got endpoints: latency-svc-nr7gd [652.317064ms]
Apr 20 00:48:39.652: INFO: Got endpoints: latency-svc-m65kl [615.422412ms]
Apr 20 00:48:39.689: INFO: Created: latency-svc-slhpr
Apr 20 00:48:39.742: INFO: Got endpoints: latency-svc-slhpr [676.978367ms]
Apr 20 00:48:39.754: INFO: Created: latency-svc-hc5km
Apr 20 00:48:39.770: INFO: Got endpoints: latency-svc-hc5km [668.945698ms]
Apr 20 00:48:39.791: INFO: Created: latency-svc-jkwkx
Apr 20 00:48:39.806: INFO: Got endpoints: latency-svc-jkwkx [671.262102ms]
Apr 20 00:48:39.825: INFO: Created: latency-svc-wvbgt
Apr 20 00:48:39.868: INFO: Got endpoints: latency-svc-wvbgt [677.788057ms]
Apr 20 00:48:39.886: INFO: Created: latency-svc-h5jfz
Apr 20 00:48:39.896: INFO: Got endpoints: latency-svc-h5jfz [688.222486ms]
Apr 20 00:48:39.909: INFO: Created: latency-svc-2jgrl
Apr 20 00:48:39.920: INFO: Got endpoints: latency-svc-2jgrl [670.714431ms]
Apr 20 00:48:39.934: INFO: Created: latency-svc-v4h89
Apr 20 00:48:39.950: INFO: Got endpoints: latency-svc-v4h89 [616.491973ms]
Apr 20 00:48:39.994: INFO: Created: latency-svc-hvgqx
Apr 20 00:48:39.999: INFO: Got endpoints: latency-svc-hvgqx [647.729508ms]
Apr 20 00:48:40.018:
INFO: Created: latency-svc-w5dvd Apr 20 00:48:40.030: INFO: Got endpoints: latency-svc-w5dvd [648.057134ms] Apr 20 00:48:40.049: INFO: Created: latency-svc-gf6ck Apr 20 00:48:40.059: INFO: Got endpoints: latency-svc-gf6ck [594.229983ms] Apr 20 00:48:40.077: INFO: Created: latency-svc-fxr7x Apr 20 00:48:40.120: INFO: Got endpoints: latency-svc-fxr7x [647.324303ms] Apr 20 00:48:40.137: INFO: Created: latency-svc-92ljb Apr 20 00:48:40.257: INFO: Got endpoints: latency-svc-92ljb [737.078721ms] Apr 20 00:48:40.288: INFO: Created: latency-svc-r8tmr Apr 20 00:48:40.303: INFO: Got endpoints: latency-svc-r8tmr [746.435187ms] Apr 20 00:48:40.318: INFO: Created: latency-svc-c5bdl Apr 20 00:48:40.332: INFO: Got endpoints: latency-svc-c5bdl [704.483566ms] Apr 20 00:48:40.354: INFO: Created: latency-svc-7qzh5 Apr 20 00:48:40.401: INFO: Got endpoints: latency-svc-7qzh5 [749.396724ms] Apr 20 00:48:40.414: INFO: Created: latency-svc-k68zj Apr 20 00:48:40.425: INFO: Got endpoints: latency-svc-k68zj [683.033398ms] Apr 20 00:48:40.455: INFO: Created: latency-svc-b4qfw Apr 20 00:48:40.467: INFO: Got endpoints: latency-svc-b4qfw [696.594014ms] Apr 20 00:48:40.485: INFO: Created: latency-svc-447m2 Apr 20 00:48:40.533: INFO: Got endpoints: latency-svc-447m2 [726.938005ms] Apr 20 00:48:40.552: INFO: Created: latency-svc-8c8cp Apr 20 00:48:40.569: INFO: Got endpoints: latency-svc-8c8cp [700.801303ms] Apr 20 00:48:40.588: INFO: Created: latency-svc-q4zlf Apr 20 00:48:40.605: INFO: Got endpoints: latency-svc-q4zlf [709.737928ms] Apr 20 00:48:40.631: INFO: Created: latency-svc-xh9wc Apr 20 00:48:40.683: INFO: Got endpoints: latency-svc-xh9wc [763.015442ms] Apr 20 00:48:40.702: INFO: Created: latency-svc-v954v Apr 20 00:48:40.716: INFO: Got endpoints: latency-svc-v954v [766.058216ms] Apr 20 00:48:40.732: INFO: Created: latency-svc-b59sk Apr 20 00:48:40.746: INFO: Got endpoints: latency-svc-b59sk [747.358355ms] Apr 20 00:48:40.762: INFO: Created: latency-svc-49jl5 Apr 20 00:48:40.821: INFO: Got 
endpoints: latency-svc-49jl5 [791.556659ms] Apr 20 00:48:40.833: INFO: Created: latency-svc-9vmnl Apr 20 00:48:40.847: INFO: Got endpoints: latency-svc-9vmnl [787.58153ms] Apr 20 00:48:40.876: INFO: Created: latency-svc-ntlmv Apr 20 00:48:40.907: INFO: Got endpoints: latency-svc-ntlmv [787.480588ms] Apr 20 00:48:40.964: INFO: Created: latency-svc-4kbhn Apr 20 00:48:40.978: INFO: Got endpoints: latency-svc-4kbhn [720.489833ms] Apr 20 00:48:41.003: INFO: Created: latency-svc-bkncm Apr 20 00:48:41.015: INFO: Got endpoints: latency-svc-bkncm [712.140349ms] Apr 20 00:48:41.038: INFO: Created: latency-svc-zzjj2 Apr 20 00:48:41.090: INFO: Got endpoints: latency-svc-zzjj2 [757.574455ms] Apr 20 00:48:41.090: INFO: Latencies: [63.638177ms 167.109728ms 210.391206ms 245.780923ms 323.109066ms 366.067869ms 411.718662ms 471.061643ms 501.605935ms 531.340737ms 594.229983ms 603.082288ms 607.068574ms 610.16183ms 611.092858ms 615.422412ms 616.491973ms 621.219077ms 624.609902ms 626.07192ms 628.494442ms 644.449657ms 647.324303ms 647.729508ms 648.057134ms 650.984787ms 652.317064ms 652.611008ms 652.787716ms 652.858795ms 655.340105ms 656.700318ms 659.037481ms 659.497314ms 660.154076ms 665.117257ms 665.762503ms 668.065929ms 668.945698ms 669.635856ms 670.714431ms 670.86706ms 670.916884ms 671.262102ms 672.327519ms 672.420664ms 672.744133ms 676.824822ms 676.978367ms 677.294972ms 677.497963ms 677.749528ms 677.788057ms 679.437754ms 682.257621ms 683.033398ms 687.459107ms 688.222486ms 688.348631ms 688.961049ms 689.114715ms 690.458714ms 690.825133ms 691.495785ms 691.865371ms 692.2863ms 693.697309ms 695.183335ms 696.556704ms 696.594014ms 696.810538ms 699.20486ms 700.5758ms 700.750951ms 700.801303ms 700.985632ms 701.666323ms 702.961684ms 703.094333ms 704.461519ms 704.483566ms 704.839114ms 705.359348ms 705.814039ms 706.783603ms 708.882131ms 709.737928ms 710.302183ms 710.66799ms 711.101615ms 711.334016ms 712.140349ms 712.296817ms 712.690498ms 713.11506ms 713.151396ms 713.246205ms 714.060276ms 
717.107996ms 718.443161ms 718.609608ms 719.275615ms 719.449372ms 720.489833ms 722.793898ms 723.250429ms 723.26556ms 724.422744ms 724.793025ms 725.173107ms 725.301074ms 725.327304ms 726.273146ms 726.938005ms 728.936215ms 729.014265ms 730.354932ms 730.426167ms 731.166049ms 731.287033ms 731.329719ms 731.336973ms 731.821256ms 732.344636ms 734.159551ms 734.867767ms 736.207162ms 736.489137ms 736.540827ms 736.700744ms 736.797233ms 737.078721ms 737.311453ms 737.31647ms 738.47044ms 738.600671ms 738.636441ms 739.110398ms 740.522025ms 742.311391ms 743.130716ms 743.136312ms 746.435187ms 746.730464ms 747.358355ms 748.445281ms 749.396724ms 749.70849ms 754.349405ms 754.591675ms 754.635165ms 754.819592ms 755.397883ms 755.5222ms 756.821035ms 757.574455ms 759.855789ms 760.561154ms 760.633462ms 761.29863ms 763.015442ms 764.169706ms 766.058216ms 766.398395ms 767.009927ms 768.081839ms 768.353382ms 770.227518ms 771.554331ms 772.046768ms 772.302779ms 772.755782ms 772.89419ms 772.920238ms 773.443779ms 774.669899ms 775.232267ms 778.450507ms 778.471906ms 778.757346ms 779.53729ms 779.560292ms 779.860183ms 780.876931ms 784.045472ms 784.858898ms 785.246231ms 787.480588ms 787.58153ms 791.556659ms 793.7304ms 795.877481ms 799.316937ms 801.663839ms 813.473758ms 814.664431ms 814.701405ms 826.375681ms 832.783558ms 839.028329ms] Apr 20 00:48:41.090: INFO: 50 %ile: 718.609608ms Apr 20 00:48:41.090: INFO: 90 %ile: 779.53729ms Apr 20 00:48:41.090: INFO: 99 %ile: 832.783558ms Apr 20 00:48:41.090: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:48:41.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8250" for this suite. 
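The percentile summary above (50/90/99 %ile over 200 samples) can be reproduced from the raw latency list with a short script. This is an illustrative nearest-rank percentile calculation, not the e2e framework's own code, and the sample list below is abbreviated from the log rather than complete:

```python
import math

# Nearest-rank percentile over latency samples (durations in milliseconds).
# Illustrative only; the e2e framework's exact percentile method may differ.
def percentile(samples, p):
    """Return the value at rank ceil(p/100 * n) in sorted order."""
    s = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]

if __name__ == "__main__":
    # A handful of values taken from the latency list above, for illustration.
    samples = [63.6, 167.1, 210.4, 718.6, 719.3, 779.5, 832.8, 839.0]
    for p in (50, 90, 99):
        print(f"{p} %ile: {percentile(samples, p)}ms")
```

With the full 200-sample list from the log, this style of calculation yields the p50/p90/p99 figures the test reports before asserting that tail latency "should not be very high".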
• [SLOW TEST:13.010 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":248,"skipped":4297,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:48:41.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:48:52.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7312" for this suite. • [SLOW TEST:11.217 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":249,"skipped":4304,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:48:52.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:48:52.476: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 20 00:48:52.529: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 20 00:48:57.550: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 20 00:48:57.551: INFO: Creating deployment "test-rolling-update-deployment" Apr 20 00:48:57.581: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 20 00:48:57.628: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 20 00:48:59.730: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 20 00:48:59.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940537, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940537, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940537, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940537, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:49:01.916: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 20 00:49:02.016: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7425 /apis/apps/v1/namespaces/deployment-7425/deployments/test-rolling-update-deployment c6cd5f03-7133-4fa3-9f30-ffde84d04ee3 9475630 1 2020-04-20 00:48:57 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0003592e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-20 00:48:57 +0000 UTC,LastTransitionTime:2020-04-20 00:48:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-20 00:49:01 +0000 UTC,LastTransitionTime:2020-04-20 00:48:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 20 00:49:02.055: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-7425 /apis/apps/v1/namespaces/deployment-7425/replicasets/test-rolling-update-deployment-664dd8fc7f 84d20394-8e0f-4715-8035-b1cad4bc1b9c 9475615 1 2020-04-20 00:48:57 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c6cd5f03-7133-4fa3-9f30-ffde84d04ee3 0xc002179017 0xc002179018}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002179088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:49:02.055: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 20 00:49:02.055: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7425 /apis/apps/v1/namespaces/deployment-7425/replicasets/test-rolling-update-controller fca7f957-5aad-41a7-b13d-0dbfccfaed3c 9475628 2 2020-04-20 00:48:52 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c6cd5f03-7133-4fa3-9f30-ffde84d04ee3 0xc002178f47 0xc002178f48}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002178fa8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:49:02.060: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-4fzlv" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-4fzlv test-rolling-update-deployment-664dd8fc7f- deployment-7425 /api/v1/namespaces/deployment-7425/pods/test-rolling-update-deployment-664dd8fc7f-4fzlv 80009acb-eaa0-43eb-98bc-8353c2b6e37f 9475614 0 2020-04-20 00:48:57 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 84d20394-8e0f-4715-8035-b1cad4bc1b9c 0xc002179567 0xc002179568}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9fm2s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9fm2s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9fm2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:48:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:49:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:49:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:48:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.201,StartTime:2020-04-20 00:48:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:49:00 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://4652dd5edfc6f41f7d64b14e5828b1bcdf455441eac69ffa036b83216edd4738,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:49:02.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7425" for this suite. • [SLOW TEST:9.776 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":250,"skipped":4330,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 
00:49:02.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 20 00:49:06.846: INFO: Successfully updated pod "labelsupdate74258140-651d-4ace-968e-2ab6d7dde40c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:49:08.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9349" for this suite. • [SLOW TEST:6.804 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4344,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:49:08.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-e9acfa83-cc89-46b9-862c-3e2b3e96066d STEP: Creating a pod to test consume configMaps Apr 20 00:49:08.995: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6" in namespace "configmap-1535" to be "Succeeded or Failed" Apr 20 00:49:09.015: INFO: Pod "pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.72456ms Apr 20 00:49:11.039: INFO: Pod "pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043721148s Apr 20 00:49:13.043: INFO: Pod "pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047915487s STEP: Saw pod success Apr 20 00:49:13.043: INFO: Pod "pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6" satisfied condition "Succeeded or Failed" Apr 20 00:49:13.046: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6 container configmap-volume-test: STEP: delete the pod Apr 20 00:49:13.080: INFO: Waiting for pod pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6 to disappear Apr 20 00:49:13.090: INFO: Pod pod-configmaps-2ac1114d-b315-44d9-ad35-b18ab3fd5ac6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:49:13.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1535" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4370,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:49:13.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name 
s-test-opt-del-58b204aa-2390-415d-8ba2-1635b3052011 STEP: Creating secret with name s-test-opt-upd-914e9d87-ff7e-442e-8df8-723d21d2a8b7 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-58b204aa-2390-415d-8ba2-1635b3052011 STEP: Updating secret s-test-opt-upd-914e9d87-ff7e-442e-8df8-723d21d2a8b7 STEP: Creating secret with name s-test-opt-create-903fd187-53a0-4f35-8e4d-e0cbf5cae1d3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:50:49.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2723" for this suite. • [SLOW TEST:96.662 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4374,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:50:49.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:50:50.547: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:50:52.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 00:50:54.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940650, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:50:57.600: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:50:57.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:50:58.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9766" for this suite. STEP: Destroying namespace "webhook-9766-markers" for this suite. 
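The run above registers a validating webhook that rejects create, update, and delete operations on a custom resource. As a rough sketch of the kind of registration object the e2e framework sets up (the group, resource name, path, and CA bundle below are placeholders, not values from this run; only the service name `e2e-test-webhook` and namespace `webhook-9766` appear in the log):

```yaml
# Hypothetical sketch of the validating webhook this test registers.
# apiGroup, resource, path, and caBundle are illustrative placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-webhook
webhooks:
  - name: deny-custom-resource.webhook.example.com
    rules:
      - apiGroups: ["webhook.example.com"]     # placeholder group
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["e2e-test-webhook-crds"]   # placeholder resource
    clientConfig:
      service:
        namespace: webhook-9766       # test namespace from the log
        name: e2e-test-webhook        # service name from the log
        path: /custom-resource        # placeholder path
      caBundle: <base64-encoded-CA>   # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

Requests matching the rules are sent to the webhook service; when it answers `allowed: false`, the API server rejects the operation, which is what the "should be denied" steps in the log verify.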
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.076 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":254,"skipped":4377,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:50:58.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe 
notifications for all changes to the configmap after the first update Apr 20 00:50:58.914: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9091 /api/v1/namespaces/watch-9091/configmaps/e2e-watch-test-resource-version b7b93163-706c-499b-9001-5475268d85be 9476174 0 2020-04-20 00:50:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 20 00:50:58.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9091 /api/v1/namespaces/watch-9091/configmaps/e2e-watch-test-resource-version b7b93163-706c-499b-9001-5475268d85be 9476175 0 2020-04-20 00:50:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:50:58.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9091" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":255,"skipped":4381,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:50:58.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 20 00:50:59.910: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 20 00:51:01.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940659, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940659, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940659, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722940659, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 20 00:51:04.950: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:51:04.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6321-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:06.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7337" for this suite. STEP: Destroying namespace "webhook-7337-markers" for this suite. 
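This test's mutating webhook (registered for `e2e-test-webhook-6321-crds.webhook.example.com`, per the log) patches incoming custom resources; "with pruning" means any mutated field not declared in the CRD's structural schema is then dropped by the API server. A minimal sketch of the mutating counterpart, with placeholder group, resource, path, and CA bundle:

```yaml
# Hypothetical sketch; the webhook name is from the log,
# everything else is an illustrative placeholder.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-webhook
webhooks:
  - name: e2e-test-webhook-6321-crds.webhook.example.com
    rules:
      - apiGroups: ["webhook.example.com"]         # placeholder group
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-6321-crds"]  # placeholder resource
    clientConfig:
      service:
        namespace: webhook-7337       # test namespace from the log
        name: e2e-test-webhook        # service name from the log
        path: /mutating-custom-resource   # placeholder path
      caBundle: <base64-encoded-CA>       # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

The webhook returns a JSONPatch in its AdmissionReview response; fields the patch adds outside the structural schema are pruned before the object is persisted, and the test asserts the stored resource reflects both the mutation and the pruning.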
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.295 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":256,"skipped":4401,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:06.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:06.334: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "services-2439" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":257,"skipped":4455,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:06.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-rqtg STEP: Creating a pod to test atomic-volume-subpath Apr 20 00:51:06.442: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rqtg" in namespace "subpath-7407" to be "Succeeded or Failed" Apr 20 00:51:06.445: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.569195ms Apr 20 00:51:08.572: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.129080335s Apr 20 00:51:10.576: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 4.13323571s Apr 20 00:51:12.580: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 6.137402904s Apr 20 00:51:14.584: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 8.141803127s Apr 20 00:51:16.588: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 10.145859016s Apr 20 00:51:18.593: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 12.150328091s Apr 20 00:51:20.597: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 14.154094126s Apr 20 00:51:22.601: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 16.158256894s Apr 20 00:51:24.605: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 18.162711826s Apr 20 00:51:26.609: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 20.166747004s Apr 20 00:51:28.614: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 22.171014207s Apr 20 00:51:30.618: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Running", Reason="", readiness=true. Elapsed: 24.175216737s Apr 20 00:51:32.622: INFO: Pod "pod-subpath-test-projected-rqtg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.179598554s STEP: Saw pod success Apr 20 00:51:32.622: INFO: Pod "pod-subpath-test-projected-rqtg" satisfied condition "Succeeded or Failed" Apr 20 00:51:32.626: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-rqtg container test-container-subpath-projected-rqtg: STEP: delete the pod Apr 20 00:51:32.675: INFO: Waiting for pod pod-subpath-test-projected-rqtg to disappear Apr 20 00:51:32.683: INFO: Pod pod-subpath-test-projected-rqtg no longer exists STEP: Deleting pod pod-subpath-test-projected-rqtg Apr 20 00:51:32.683: INFO: Deleting pod "pod-subpath-test-projected-rqtg" in namespace "subpath-7407" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7407" for this suite. • [SLOW TEST:26.350 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":258,"skipped":4468,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:32.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:32.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4192" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":259,"skipped":4481,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:32.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-2982 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2982 to expose endpoints map[] Apr 20 00:51:32.892: INFO: Get endpoints failed (15.026612ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 20 00:51:33.897: INFO: successfully validated that service multi-endpoint-test in namespace services-2982 exposes endpoints map[] (1.019388325s elapsed) STEP: Creating pod pod1 in namespace services-2982 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2982 to expose endpoints map[pod1:[100]] Apr 20 00:51:37.156: INFO: successfully validated that service multi-endpoint-test in namespace services-2982 exposes endpoints map[pod1:[100]] (3.252447203s elapsed) STEP: Creating pod pod2 in namespace services-2982 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2982 to expose endpoints map[pod1:[100] pod2:[101]] Apr 20 00:51:40.247: INFO: successfully validated that service multi-endpoint-test in namespace services-2982 exposes endpoints map[pod1:[100] pod2:[101]] (3.086774851s elapsed) STEP: Deleting pod pod1 in namespace services-2982 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2982 to expose endpoints map[pod2:[101]] Apr 20 00:51:40.281: INFO: successfully validated that service multi-endpoint-test in namespace services-2982 exposes endpoints map[pod2:[101]] (25.725968ms elapsed) STEP: Deleting pod pod2 in namespace services-2982 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2982 to expose endpoints map[] Apr 20 00:51:41.299: INFO: successfully validated that service multi-endpoint-test 
in namespace services-2982 exposes endpoints map[] (1.014099077s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:41.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2982" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.635 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":260,"skipped":4484,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:41.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 20 00:51:41.448: INFO: 
Waiting up to 5m0s for pod "downward-api-c885756b-a423-4de0-9c71-4016bd434da3" in namespace "downward-api-9500" to be "Succeeded or Failed" Apr 20 00:51:41.452: INFO: Pod "downward-api-c885756b-a423-4de0-9c71-4016bd434da3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936978ms Apr 20 00:51:43.456: INFO: Pod "downward-api-c885756b-a423-4de0-9c71-4016bd434da3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008455116s Apr 20 00:51:45.461: INFO: Pod "downward-api-c885756b-a423-4de0-9c71-4016bd434da3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013355234s STEP: Saw pod success Apr 20 00:51:45.461: INFO: Pod "downward-api-c885756b-a423-4de0-9c71-4016bd434da3" satisfied condition "Succeeded or Failed" Apr 20 00:51:45.464: INFO: Trying to get logs from node latest-worker2 pod downward-api-c885756b-a423-4de0-9c71-4016bd434da3 container dapi-container: STEP: delete the pod Apr 20 00:51:45.514: INFO: Waiting for pod downward-api-c885756b-a423-4de0-9c71-4016bd434da3 to disappear Apr 20 00:51:45.550: INFO: Pod downward-api-c885756b-a423-4de0-9c71-4016bd434da3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:45.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9500" for this suite. 
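The pod this Downward API test creates exposes pod metadata as environment variables via `fieldRef`. A minimal equivalent manifest (the image and pod name are illustrative; the container name `dapi-container` matches the log):

```yaml
# Illustrative manifest; image and metadata.name are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox                 # illustrative image
      command: ["sh", "-c", "env | grep POD_"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
```

The kubelet resolves each `fieldRef` when the container starts; the test then fetches the container's logs (the "Trying to get logs" step above) and checks that the printed values match the pod's actual name, namespace, and IP.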
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4485,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:45.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 20 00:51:45.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4" in namespace "projected-7982" to be "Succeeded or Failed" Apr 20 00:51:45.659: INFO: Pod "downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.859203ms Apr 20 00:51:47.663: INFO: Pod "downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012067713s Apr 20 00:51:49.668: INFO: Pod "downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016555069s STEP: Saw pod success Apr 20 00:51:49.668: INFO: Pod "downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4" satisfied condition "Succeeded or Failed" Apr 20 00:51:49.671: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4 container client-container: STEP: delete the pod Apr 20 00:51:49.691: INFO: Waiting for pod downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4 to disappear Apr 20 00:51:49.695: INFO: Pod downwardapi-volume-e6657f53-351e-43fd-8a4c-794f6b0e35b4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:49.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7982" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4494,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:49.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 20 00:51:49.774: INFO: Waiting up to 5m0s for pod "var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8" in namespace "var-expansion-3555" to be "Succeeded or Failed" Apr 20 00:51:49.806: INFO: Pod "var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.353706ms Apr 20 00:51:51.809: INFO: Pod "var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034358435s Apr 20 00:51:53.813: INFO: Pod "var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038384898s STEP: Saw pod success Apr 20 00:51:53.813: INFO: Pod "var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8" satisfied condition "Succeeded or Failed" Apr 20 00:51:53.816: INFO: Trying to get logs from node latest-worker pod var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8 container dapi-container: STEP: delete the pod Apr 20 00:51:53.848: INFO: Waiting for pod var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8 to disappear Apr 20 00:51:53.857: INFO: Pod var-expansion-e6100df7-5e74-40e7-bdec-a880abb0f8e8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:53.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3555" for this suite. 
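The var-expansion pod exercises `$(VAR)` substitution: a variable declared in `env` can be referenced in the container's `args`, and the kubelet expands it before the container runs. A minimal sketch (pod name, image, and the variable name are illustrative):

```yaml
# Illustrative manifest; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox               # illustrative image
      env:
        - name: MESSAGE
          value: "test-value"
      command: ["sh", "-c"]
      # $(MESSAGE) is expanded by the kubelet, not by the shell
      args: ["echo $(MESSAGE)"]
```

As with the Downward API test, the assertion is made by reading the container's log output and checking that the substituted value appears there.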
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4514,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:53.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-27cd41aa-9bd5-4436-ba0b-cef32519da63 STEP: Creating a pod to test consume secrets Apr 20 00:51:53.927: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f" in namespace "projected-747" to be "Succeeded or Failed" Apr 20 00:51:53.945: INFO: Pod "pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.274989ms Apr 20 00:51:55.949: INFO: Pod "pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022211919s Apr 20 00:51:57.954: INFO: Pod "pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02676464s STEP: Saw pod success Apr 20 00:51:57.954: INFO: Pod "pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f" satisfied condition "Succeeded or Failed" Apr 20 00:51:57.956: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f container projected-secret-volume-test: STEP: delete the pod Apr 20 00:51:57.989: INFO: Waiting for pod pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f to disappear Apr 20 00:51:58.009: INFO: Pod pod-projected-secrets-4492cbef-3a99-4a38-a142-218e9e446d3f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:51:58.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-747" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4527,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:51:58.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 20 00:51:58.084: INFO: Creating deployment "webserver-deployment" Apr 20 00:51:58.093: INFO: Waiting for observed generation 1 Apr 20 00:52:00.103: INFO: Waiting for all required pods to come up Apr 20 00:52:00.107: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 20 00:52:08.116: INFO: Waiting for deployment "webserver-deployment" to complete Apr 20 00:52:08.121: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 20 00:52:08.126: INFO: Updating deployment webserver-deployment Apr 20 00:52:08.127: INFO: Waiting for observed generation 2 Apr 20 00:52:10.139: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 20 00:52:10.142: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 20 00:52:10.145: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 20 00:52:10.152: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 20 00:52:10.152: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 20 00:52:10.154: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 20 00:52:10.157: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 20 00:52:10.157: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 20 00:52:10.162: INFO: Updating deployment webserver-deployment Apr 20 00:52:10.162: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 20 00:52:10.272: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 20 00:52:10.283: INFO: Verifying that second rollout's replicaset has .spec.replicas = 
13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 20 00:52:10.406: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5600 /apis/apps/v1/namespaces/deployment-5600/deployments/webserver-deployment bd7347fe-6c36-42ec-ae82-2926ae1cdc7d 9476870 3 2020-04-20 00:51:58 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002be1848 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-20 00:52:08 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 
UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-20 00:52:10 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 20 00:52:10.474: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5600 /apis/apps/v1/namespaces/deployment-5600/replicasets/webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 9476911 3 2020-04-20 00:52:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment bd7347fe-6c36-42ec-ae82-2926ae1cdc7d 0xc002cba177 0xc002cba178}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cba1f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:52:10.474: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 20 00:52:10.474: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5600 /apis/apps/v1/namespaces/deployment-5600/replicasets/webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 9476910 3 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment bd7347fe-6c36-42ec-ae82-2926ae1cdc7d 0xc002cba0b7 0xc002cba0b8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cba118 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 20 00:52:10.565: INFO: Pod 
"webserver-deployment-595b5b9587-2qbdj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2qbdj webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-2qbdj 25162225-0bba-4dfa-8258-02503d3b6050 9476874 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cba717 0xc002cba718}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Restart
Policy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.565: INFO: Pod "webserver-deployment-595b5b9587-4rpkr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4rpkr webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-4rpkr 3c2cd305-166a-4ac0-b96c-923ff2ba28cd 9476875 0 2020-04-20 00:52:10 +0000 UTC 
map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cba867 0xc002cba868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunA
sUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.565: INFO: Pod "webserver-deployment-595b5b9587-4xvhc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4xvhc webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-4xvhc 4c103c81-c3a7-4302-bfab-f1cdaf943ac3 9476721 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cba987 0xc002cba988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.206,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:01 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ba3e0c2c695c14947ffc1e984660fa3f1ea1318c7d0ab2486fc278e33df3b501,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.565: INFO: Pod "webserver-deployment-595b5b9587-7jr57" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7jr57 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-7jr57 6ec0c449-8b02-4143-ac4b-ee939c7d1bed 9476889 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbab07 0xc002cbab08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.566: INFO: Pod "webserver-deployment-595b5b9587-87rfx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-87rfx webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-87rfx fbc653ab-7d8d-4139-8dd1-be12f294dad7 9476786 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbac27 0xc002cbac28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.210,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://25a02250396151c7d1a9175bde16e638f695a4299b99c81e2dc7fb6d3c91cd98,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.566: INFO: Pod "webserver-deployment-595b5b9587-8pv8p" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8pv8p webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-8pv8p 9c211dae-dc99-4e01-8a39-1ddc2875f31d 9476780 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbada7 0xc002cbada8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.47,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b18e5c1e38ec469ab2162228e8e3435b69cc836018f8f43b974a49f68f6a0350,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.566: INFO: Pod "webserver-deployment-595b5b9587-9cxlm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9cxlm webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-9cxlm 9e7b723e-f1b7-4e76-8f38-429b894cb8f2 9476901 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbaf47 0xc002cbaf48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.566: INFO: Pod "webserver-deployment-595b5b9587-bf6xq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bf6xq webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-bf6xq 5569d153-0f71-4fe2-ad64-5d75c4386081 9476736 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb067 0xc002cbb068}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.44,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a4222122486b3684c90718b89d19c2d98593c58c23699d2fe7a93541d72150fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.566: INFO: Pod "webserver-deployment-595b5b9587-bvwnn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bvwnn webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-bvwnn 54e55197-81fe-4d3a-895d-9f422e88c8c9 9476902 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb207 0xc002cbb208}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.567: INFO: Pod "webserver-deployment-595b5b9587-bz7vn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bz7vn webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-bz7vn dd6c548f-2ff3-4815-a9c0-f58618c10733 9476907 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb327 0xc002cbb328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-20 00:52:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.567: INFO: Pod "webserver-deployment-595b5b9587-dzsww" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dzsww webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-dzsww f960dcf6-7477-4fcd-98c0-8e31999727ac 9476904 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb487 0xc002cbb488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.567: INFO: Pod "webserver-deployment-595b5b9587-fnzs8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fnzs8 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-fnzs8 2a20fb30-4927-4ee1-aeae-fa7329cc8278 9476752 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb5b7 0xc002cbb5b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.208,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fe89b5c7f1d009f57694a634a6369075a8e6e62f160e260ea47c341bbbe16d48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.567: INFO: Pod "webserver-deployment-595b5b9587-mqr99" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mqr99 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-mqr99 173dd360-53a1-42fb-9403-bbe21cebe167 9476879 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb747 0xc002cbb748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.567: INFO: Pod "webserver-deployment-595b5b9587-ph9h7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ph9h7 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-ph9h7 d465b578-536b-446a-8cae-b91740f92496 9476900 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb877 0xc002cbb878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 20 00:52:10.567: INFO: Pod "webserver-deployment-595b5b9587-qnrc7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qnrc7 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-qnrc7 10bcdc97-007c-4301-9dd7-68e199a016f8 9476739 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbb9d7 0xc002cbb9d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.207,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:03 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://38d967095a52201126804939a803183f7a808940e23a324ff9820317026ad259,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.568: INFO: Pod "webserver-deployment-595b5b9587-shcnm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-shcnm webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-shcnm 4d610fbf-9288-4a87-8acf-65db5f213b3a 9476930 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbbba7 0xc002cbbba8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-20 00:52:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.568: INFO: Pod "webserver-deployment-595b5b9587-wg5z7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wg5z7 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-wg5z7 37b38e50-e1c1-4ad7-8ea2-f0d0e7318087 9476906 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbbd07 0xc002cbbd08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.568: INFO: Pod "webserver-deployment-595b5b9587-xlw5g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xlw5g webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-xlw5g 30cc9134-bf99-48e7-a321-34dd2ab4f9dd 9476886 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbbe27 0xc002cbbe28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.568: INFO: Pod "webserver-deployment-595b5b9587-zqpb8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zqpb8 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-zqpb8 480bf9aa-5b70-4124-9a42-b39863fece5c 9476762 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002cbbf47 0xc002cbbf48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.45,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a265b689825502a9d9326e1d4051f4ad95850f2c10edb06413715448f2d5966b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.568: INFO: Pod "webserver-deployment-595b5b9587-zw24b" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zw24b webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-zw24b f987030c-336d-4c96-8c9c-e8e20f0a80bf 9476789 0 2020-04-20 00:51:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df32f4ed-8146-4a5a-962a-62fca3185022 0xc002b781d7 0xc002b781d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.209,StartTime:2020-04-20 00:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-20 00:52:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e524a66d4ee005c15f1219e7afc857d1d809b6236a060988ace63599787b702c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.568: INFO: Pod "webserver-deployment-c7997dcc8-55sdv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-55sdv webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-55sdv f40f1776-7b62-4bdb-a9cd-e71147277279 9476915 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b78417 0xc002b78418}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.568: INFO: Pod "webserver-deployment-c7997dcc8-6rtjn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6rtjn webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-6rtjn 9822fe85-a0be-45d7-b506-0459dd0a100d 9476833 0 2020-04-20 00:52:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b78557 0xc002b78558}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-20 00:52:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.569: INFO: Pod "webserver-deployment-c7997dcc8-9qk28" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9qk28 webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-9qk28 9339ad7b-b820-49f5-bc8e-2186cff0b415 9476914 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b786d7 0xc002b786d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.569: INFO: Pod "webserver-deployment-c7997dcc8-bnz5t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bnz5t webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-bnz5t ea0d1227-9da3-4233-a880-abb73bd8d56b 9476824 0 2020-04-20 00:52:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b78807 0xc002b78808}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-20 00:52:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.569: INFO: Pod "webserver-deployment-c7997dcc8-cc4wd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cc4wd webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-cc4wd 5a2e8244-d43e-4741-a931-ffc702d95ee5 9476846 0 2020-04-20 00:52:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b78987 0xc002b78988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-20 00:52:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.569: INFO: Pod "webserver-deployment-c7997dcc8-fmfqx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fmfqx webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-fmfqx f345d621-0a02-4e6f-8836-4ed6e1915e4c 9476922 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b78bb7 0xc002b78bb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-20 00:52:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.569: INFO: Pod "webserver-deployment-c7997dcc8-jxdlv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jxdlv webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-jxdlv 47e0b8e4-be5f-4514-9a45-de3dd42881aa 9476819 0 2020-04-20 00:52:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b78e57 0xc002b78e58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-20 00:52:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.570: INFO: Pod "webserver-deployment-c7997dcc8-l89dl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l89dl webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-l89dl 498d3032-a6d4-43d8-990a-cddaca6e2fd2 9476887 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b79097 0xc002b79098}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.570: INFO: Pod "webserver-deployment-c7997dcc8-lqdcl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lqdcl webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-lqdcl accda828-a459-49e3-b6c7-6cd7a390b15e 9476909 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b79277 0xc002b79278}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.570: INFO: Pod "webserver-deployment-c7997dcc8-lxwcs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lxwcs webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-lxwcs fc3d8c3e-79aa-4679-88f0-2a6582d408f7 9476881 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b79417 0xc002b79418}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.570: INFO: Pod "webserver-deployment-c7997dcc8-rxxbc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rxxbc webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-rxxbc 7ebd5604-8d81-4f20-a68f-37e5715ea7b6 9476845 0 2020-04-20 00:52:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b796b7 0xc002b796b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-20 00:52:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.570: INFO: Pod "webserver-deployment-c7997dcc8-vnx94" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vnx94 webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-vnx94 a0b0dd55-2b93-4209-a993-93951826ab55 9476919 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b798c7 0xc002b798c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 20 00:52:10.570: INFO: Pod "webserver-deployment-c7997dcc8-z647s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z647s webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-z647s 9373439a-bc82-4bf6-bcf8-b34aa504291a 9476908 0 2020-04-20 00:52:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0593fba3-8406-4660-8a25-99c8d1b70f0f 0xc002b79ba7 0xc002b79ba8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jmfrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jmfrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jmfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-20 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:52:10.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5600" for this suite. 
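The "deployment should support proportional scaling" test that finishes above scales a deployment while a rollout is in flight, and expects new replicas to be split across the old and new replica sets in proportion to their current sizes. A minimal sketch of that distribution rule follows; it is an illustrative simplification, not the real deployment controller, which additionally accounts for maxSurge and breaks ties by replica set age:

```go
package main

import "fmt"

// proportionalScale distributes newTotal replicas across replica sets in
// proportion to their current sizes. Illustrative only: the actual
// controller logic in kubernetes/kubernetes is more involved.
func proportionalScale(current []int, newTotal int) []int {
	sum := 0
	for _, c := range current {
		sum += c
	}
	result := make([]int, len(current))
	if sum == 0 {
		return result // nothing running yet; nothing to apportion
	}
	allocated := 0
	for i, c := range current {
		result[i] = c * newTotal / sum // floor of the proportional share
		allocated += result[i]
	}
	// Hand out any leftover replicas one at a time, front to back.
	for i := 0; allocated < newTotal; i = (i + 1) % len(result) {
		result[i]++
		allocated++
	}
	return result
}

func main() {
	// Scaling from 10 replicas (split 7/3 between two replica sets
	// mid-rollout) up to 15 keeps roughly the same 7:3 ratio.
	fmt.Println(proportionalScale([]int{7, 3}, 15)) // [11 4]
}
```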
• [SLOW TEST:12.772 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":265,"skipped":4539,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:52:10.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-5bcd2ce5-b8f2-48c6-95c0-a517c7712746 STEP: Creating configMap with name cm-test-opt-upd-413c7e34-7943-4b82-af33-59f10a345d2f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5bcd2ce5-b8f2-48c6-95c0-a517c7712746 STEP: Updating configmap cm-test-opt-upd-413c7e34-7943-4b82-af33-59f10a345d2f STEP: Creating configMap with name cm-test-opt-create-9688e024-28ec-41a0-8667-300fe8f00bb9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:53:51.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1733" for this suite. • [SLOW TEST:100.555 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4585,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:53:51.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 20 00:53:51.475: INFO: Waiting up to 5m0s for pod 
"pod-host-path-test" in namespace "hostpath-4714" to be "Succeeded or Failed" Apr 20 00:53:51.479: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935592ms Apr 20 00:53:53.483: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007686457s Apr 20 00:53:55.487: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011796329s Apr 20 00:53:57.491: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016081504s STEP: Saw pod success Apr 20 00:53:57.491: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 20 00:53:57.494: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 20 00:53:57.535: INFO: Waiting for pod pod-host-path-test to disappear Apr 20 00:53:57.545: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:53:57.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4714" for this suite. 
• [SLOW TEST:6.209 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4635,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:53:57.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-26 STEP: creating replication controller nodeport-test in namespace services-26 I0420 00:53:57.863004 8 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-26, replica count: 2 I0420 00:54:00.913564 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 00:54:03.913798 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 20 00:54:03.913: INFO: Creating new exec pod Apr 20 00:54:08.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-26 execpod6xb9s -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 20 00:54:09.144: INFO: stderr: "I0420 00:54:09.072106 3235 log.go:172] (0xc000bb6370) (0xc0009de000) Create stream\nI0420 00:54:09.072164 3235 log.go:172] (0xc000bb6370) (0xc0009de000) Stream added, broadcasting: 1\nI0420 00:54:09.075605 3235 log.go:172] (0xc000bb6370) Reply frame received for 1\nI0420 00:54:09.075657 3235 log.go:172] (0xc000bb6370) (0xc0009e8000) Create stream\nI0420 00:54:09.075677 3235 log.go:172] (0xc000bb6370) (0xc0009e8000) Stream added, broadcasting: 3\nI0420 00:54:09.076640 3235 log.go:172] (0xc000bb6370) Reply frame received for 3\nI0420 00:54:09.076679 3235 log.go:172] (0xc000bb6370) (0xc0009e80a0) Create stream\nI0420 00:54:09.076696 3235 log.go:172] (0xc000bb6370) (0xc0009e80a0) Stream added, broadcasting: 5\nI0420 00:54:09.077920 3235 log.go:172] (0xc000bb6370) Reply frame received for 5\nI0420 00:54:09.137559 3235 log.go:172] (0xc000bb6370) Data frame received for 5\nI0420 00:54:09.137590 3235 log.go:172] (0xc0009e80a0) (5) Data frame handling\nI0420 00:54:09.137611 3235 log.go:172] (0xc0009e80a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0420 00:54:09.137829 3235 log.go:172] (0xc000bb6370) Data frame received for 5\nI0420 00:54:09.137848 3235 log.go:172] (0xc0009e80a0) (5) Data frame handling\nI0420 00:54:09.137858 3235 log.go:172] (0xc0009e80a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0420 00:54:09.137987 3235 log.go:172] (0xc000bb6370) Data frame received for 5\nI0420 
00:54:09.137996 3235 log.go:172] (0xc0009e80a0) (5) Data frame handling\nI0420 00:54:09.138257 3235 log.go:172] (0xc000bb6370) Data frame received for 3\nI0420 00:54:09.138277 3235 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0420 00:54:09.139813 3235 log.go:172] (0xc000bb6370) Data frame received for 1\nI0420 00:54:09.139836 3235 log.go:172] (0xc0009de000) (1) Data frame handling\nI0420 00:54:09.139856 3235 log.go:172] (0xc0009de000) (1) Data frame sent\nI0420 00:54:09.139873 3235 log.go:172] (0xc000bb6370) (0xc0009de000) Stream removed, broadcasting: 1\nI0420 00:54:09.139889 3235 log.go:172] (0xc000bb6370) Go away received\nI0420 00:54:09.140359 3235 log.go:172] (0xc000bb6370) (0xc0009de000) Stream removed, broadcasting: 1\nI0420 00:54:09.140382 3235 log.go:172] (0xc000bb6370) (0xc0009e8000) Stream removed, broadcasting: 3\nI0420 00:54:09.140395 3235 log.go:172] (0xc000bb6370) (0xc0009e80a0) Stream removed, broadcasting: 5\n" Apr 20 00:54:09.144: INFO: stdout: "" Apr 20 00:54:09.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-26 execpod6xb9s -- /bin/sh -x -c nc -zv -t -w 2 10.96.61.115 80' Apr 20 00:54:09.353: INFO: stderr: "I0420 00:54:09.264013 3255 log.go:172] (0xc000b6cfd0) (0xc000950320) Create stream\nI0420 00:54:09.264060 3255 log.go:172] (0xc000b6cfd0) (0xc000950320) Stream added, broadcasting: 1\nI0420 00:54:09.268775 3255 log.go:172] (0xc000b6cfd0) Reply frame received for 1\nI0420 00:54:09.268826 3255 log.go:172] (0xc000b6cfd0) (0xc0006e1680) Create stream\nI0420 00:54:09.268843 3255 log.go:172] (0xc000b6cfd0) (0xc0006e1680) Stream added, broadcasting: 3\nI0420 00:54:09.269892 3255 log.go:172] (0xc000b6cfd0) Reply frame received for 3\nI0420 00:54:09.269926 3255 log.go:172] (0xc000b6cfd0) (0xc000520aa0) Create stream\nI0420 00:54:09.269937 3255 log.go:172] (0xc000b6cfd0) (0xc000520aa0) Stream added, broadcasting: 5\nI0420 00:54:09.270816 3255 log.go:172] 
(0xc000b6cfd0) Reply frame received for 5\nI0420 00:54:09.345850 3255 log.go:172] (0xc000b6cfd0) Data frame received for 3\nI0420 00:54:09.345893 3255 log.go:172] (0xc0006e1680) (3) Data frame handling\nI0420 00:54:09.345917 3255 log.go:172] (0xc000b6cfd0) Data frame received for 5\nI0420 00:54:09.345927 3255 log.go:172] (0xc000520aa0) (5) Data frame handling\nI0420 00:54:09.345946 3255 log.go:172] (0xc000520aa0) (5) Data frame sent\nI0420 00:54:09.345964 3255 log.go:172] (0xc000b6cfd0) Data frame received for 5\nI0420 00:54:09.345983 3255 log.go:172] (0xc000520aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.61.115 80\nConnection to 10.96.61.115 80 port [tcp/http] succeeded!\nI0420 00:54:09.347420 3255 log.go:172] (0xc000b6cfd0) Data frame received for 1\nI0420 00:54:09.347444 3255 log.go:172] (0xc000950320) (1) Data frame handling\nI0420 00:54:09.347465 3255 log.go:172] (0xc000950320) (1) Data frame sent\nI0420 00:54:09.347480 3255 log.go:172] (0xc000b6cfd0) (0xc000950320) Stream removed, broadcasting: 1\nI0420 00:54:09.347735 3255 log.go:172] (0xc000b6cfd0) Go away received\nI0420 00:54:09.347848 3255 log.go:172] (0xc000b6cfd0) (0xc000950320) Stream removed, broadcasting: 1\nI0420 00:54:09.347875 3255 log.go:172] (0xc000b6cfd0) (0xc0006e1680) Stream removed, broadcasting: 3\nI0420 00:54:09.347886 3255 log.go:172] (0xc000b6cfd0) (0xc000520aa0) Stream removed, broadcasting: 5\n" Apr 20 00:54:09.353: INFO: stdout: "" Apr 20 00:54:09.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-26 execpod6xb9s -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30449' Apr 20 00:54:09.547: INFO: stderr: "I0420 00:54:09.466430 3275 log.go:172] (0xc000808a50) (0xc000831680) Create stream\nI0420 00:54:09.466490 3275 log.go:172] (0xc000808a50) (0xc000831680) Stream added, broadcasting: 1\nI0420 00:54:09.468669 3275 log.go:172] (0xc000808a50) Reply frame received for 1\nI0420 00:54:09.468714 3275 
log.go:172] (0xc000808a50) (0xc000918000) Create stream\nI0420 00:54:09.468729 3275 log.go:172] (0xc000808a50) (0xc000918000) Stream added, broadcasting: 3\nI0420 00:54:09.469863 3275 log.go:172] (0xc000808a50) Reply frame received for 3\nI0420 00:54:09.469903 3275 log.go:172] (0xc000808a50) (0xc000831720) Create stream\nI0420 00:54:09.469916 3275 log.go:172] (0xc000808a50) (0xc000831720) Stream added, broadcasting: 5\nI0420 00:54:09.470887 3275 log.go:172] (0xc000808a50) Reply frame received for 5\nI0420 00:54:09.540339 3275 log.go:172] (0xc000808a50) Data frame received for 3\nI0420 00:54:09.540369 3275 log.go:172] (0xc000918000) (3) Data frame handling\nI0420 00:54:09.540397 3275 log.go:172] (0xc000808a50) Data frame received for 5\nI0420 00:54:09.540406 3275 log.go:172] (0xc000831720) (5) Data frame handling\nI0420 00:54:09.540416 3275 log.go:172] (0xc000831720) (5) Data frame sent\nI0420 00:54:09.540424 3275 log.go:172] (0xc000808a50) Data frame received for 5\nI0420 00:54:09.540431 3275 log.go:172] (0xc000831720) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30449\nConnection to 172.17.0.13 30449 port [tcp/30449] succeeded!\nI0420 00:54:09.541945 3275 log.go:172] (0xc000808a50) Data frame received for 1\nI0420 00:54:09.541960 3275 log.go:172] (0xc000831680) (1) Data frame handling\nI0420 00:54:09.541966 3275 log.go:172] (0xc000831680) (1) Data frame sent\nI0420 00:54:09.541980 3275 log.go:172] (0xc000808a50) (0xc000831680) Stream removed, broadcasting: 1\nI0420 00:54:09.542006 3275 log.go:172] (0xc000808a50) Go away received\nI0420 00:54:09.542589 3275 log.go:172] (0xc000808a50) (0xc000831680) Stream removed, broadcasting: 1\nI0420 00:54:09.542628 3275 log.go:172] (0xc000808a50) (0xc000918000) Stream removed, broadcasting: 3\nI0420 00:54:09.542641 3275 log.go:172] (0xc000808a50) (0xc000831720) Stream removed, broadcasting: 5\n" Apr 20 00:54:09.547: INFO: stdout: "" Apr 20 00:54:09.547: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-26 execpod6xb9s -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30449' Apr 20 00:54:09.741: INFO: stderr: "I0420 00:54:09.675327 3296 log.go:172] (0xc00003a630) (0xc00054cbe0) Create stream\nI0420 00:54:09.675398 3296 log.go:172] (0xc00003a630) (0xc00054cbe0) Stream added, broadcasting: 1\nI0420 00:54:09.685796 3296 log.go:172] (0xc00003a630) Reply frame received for 1\nI0420 00:54:09.685847 3296 log.go:172] (0xc00003a630) (0xc000803360) Create stream\nI0420 00:54:09.685857 3296 log.go:172] (0xc00003a630) (0xc000803360) Stream added, broadcasting: 3\nI0420 00:54:09.687297 3296 log.go:172] (0xc00003a630) Reply frame received for 3\nI0420 00:54:09.687340 3296 log.go:172] (0xc00003a630) (0xc000803540) Create stream\nI0420 00:54:09.687360 3296 log.go:172] (0xc00003a630) (0xc000803540) Stream added, broadcasting: 5\nI0420 00:54:09.688421 3296 log.go:172] (0xc00003a630) Reply frame received for 5\nI0420 00:54:09.734342 3296 log.go:172] (0xc00003a630) Data frame received for 3\nI0420 00:54:09.734380 3296 log.go:172] (0xc000803360) (3) Data frame handling\nI0420 00:54:09.734404 3296 log.go:172] (0xc00003a630) Data frame received for 5\nI0420 00:54:09.734425 3296 log.go:172] (0xc000803540) (5) Data frame handling\nI0420 00:54:09.734445 3296 log.go:172] (0xc000803540) (5) Data frame sent\nI0420 00:54:09.734459 3296 log.go:172] (0xc00003a630) Data frame received for 5\nI0420 00:54:09.734467 3296 log.go:172] (0xc000803540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30449\nConnection to 172.17.0.12 30449 port [tcp/30449] succeeded!\nI0420 00:54:09.735917 3296 log.go:172] (0xc00003a630) Data frame received for 1\nI0420 00:54:09.735947 3296 log.go:172] (0xc00054cbe0) (1) Data frame handling\nI0420 00:54:09.735965 3296 log.go:172] (0xc00054cbe0) (1) Data frame sent\nI0420 00:54:09.735987 3296 log.go:172] (0xc00003a630) (0xc00054cbe0) Stream removed, broadcasting: 1\nI0420 
00:54:09.736080 3296 log.go:172] (0xc00003a630) Go away received\nI0420 00:54:09.736394 3296 log.go:172] (0xc00003a630) (0xc00054cbe0) Stream removed, broadcasting: 1\nI0420 00:54:09.736418 3296 log.go:172] (0xc00003a630) (0xc000803360) Stream removed, broadcasting: 3\nI0420 00:54:09.736432 3296 log.go:172] (0xc00003a630) (0xc000803540) Stream removed, broadcasting: 5\n" Apr 20 00:54:09.741: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:54:09.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-26" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.194 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":268,"skipped":4662,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:54:09.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service 
account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:54:09.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9253" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":269,"skipped":4665,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:54:09.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-3135e370-066e-486a-beda-c7069a8741cd STEP: Creating a pod to test consume configMaps Apr 20 00:54:09.987: INFO: 
Waiting up to 5m0s for pod "pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea" in namespace "projected-8586" to be "Succeeded or Failed" Apr 20 00:54:09.990: INFO: Pod "pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.208906ms Apr 20 00:54:11.994: INFO: Pod "pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007534824s Apr 20 00:54:13.999: INFO: Pod "pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012536132s STEP: Saw pod success Apr 20 00:54:13.999: INFO: Pod "pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea" satisfied condition "Succeeded or Failed" Apr 20 00:54:14.003: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea container projected-configmap-volume-test: STEP: delete the pod Apr 20 00:54:14.019: INFO: Waiting for pod pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea to disappear Apr 20 00:54:14.030: INFO: Pod pod-projected-configmaps-07166132-e7c8-4711-8fb0-91570bbccfea no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:54:14.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8586" for this suite. 
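The projected-ConfigMap-as-non-root test above boils down to a pod that mounts a `projected` volume backed by a ConfigMap and runs its container under a non-root UID. A minimal sketch of such a manifest (pod name, UID, image, and ConfigMap name are hypothetical; the real test generates its own) — here it is only printed, but against a live cluster it would be piped to `kubectl apply -f -`:

```shell
# Hypothetical manifest mirroring what this conformance test creates.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot-demo
spec:
  securityContext:
    runAsUser: 1000            # non-root UID, the point of this test
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/*"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config    # hypothetical ConfigMap holding the test data
EOF
)
printf '%s\n' "$manifest"      # pipe to `kubectl apply -f -` on a real cluster
```

The pod is expected to reach `Succeeded` once the container has read the mounted keys, matching the "Succeeded or Failed" wait in the log.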
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4668,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 20 00:54:14.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8820 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 20 00:54:14.128: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 20 00:54:14.168: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:54:16.268: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 20 00:54:18.172: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:20.172: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:22.173: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:24.172: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 
00:54:26.172: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:28.173: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:30.172: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:32.172: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:34.172: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 20 00:54:36.173: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 20 00:54:36.178: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 20 00:54:40.238: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.66 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8820 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:54:40.238: INFO: >>> kubeConfig: /root/.kube/config I0420 00:54:40.265818 8 log.go:172] (0xc004508630) (0xc001a6c780) Create stream I0420 00:54:40.265850 8 log.go:172] (0xc004508630) (0xc001a6c780) Stream added, broadcasting: 1 I0420 00:54:40.267575 8 log.go:172] (0xc004508630) Reply frame received for 1 I0420 00:54:40.267643 8 log.go:172] (0xc004508630) (0xc0028e0000) Create stream I0420 00:54:40.267684 8 log.go:172] (0xc004508630) (0xc0028e0000) Stream added, broadcasting: 3 I0420 00:54:40.268432 8 log.go:172] (0xc004508630) Reply frame received for 3 I0420 00:54:40.268463 8 log.go:172] (0xc004508630) (0xc000695ea0) Create stream I0420 00:54:40.268478 8 log.go:172] (0xc004508630) (0xc000695ea0) Stream added, broadcasting: 5 I0420 00:54:40.269399 8 log.go:172] (0xc004508630) Reply frame received for 5 I0420 00:54:41.346421 8 log.go:172] (0xc004508630) Data frame received for 3 I0420 00:54:41.346472 8 log.go:172] (0xc0028e0000) (3) Data frame handling I0420 00:54:41.346496 8 log.go:172] (0xc0028e0000) (3) Data frame sent I0420 00:54:41.346515 8 log.go:172] (0xc004508630) Data frame 
received for 3 I0420 00:54:41.346534 8 log.go:172] (0xc0028e0000) (3) Data frame handling I0420 00:54:41.346851 8 log.go:172] (0xc004508630) Data frame received for 5 I0420 00:54:41.346885 8 log.go:172] (0xc000695ea0) (5) Data frame handling I0420 00:54:41.349076 8 log.go:172] (0xc004508630) Data frame received for 1 I0420 00:54:41.349139 8 log.go:172] (0xc001a6c780) (1) Data frame handling I0420 00:54:41.349334 8 log.go:172] (0xc001a6c780) (1) Data frame sent I0420 00:54:41.349366 8 log.go:172] (0xc004508630) (0xc001a6c780) Stream removed, broadcasting: 1 I0420 00:54:41.349384 8 log.go:172] (0xc004508630) Go away received I0420 00:54:41.349491 8 log.go:172] (0xc004508630) (0xc001a6c780) Stream removed, broadcasting: 1 I0420 00:54:41.349522 8 log.go:172] (0xc004508630) (0xc0028e0000) Stream removed, broadcasting: 3 I0420 00:54:41.349537 8 log.go:172] (0xc004508630) (0xc000695ea0) Stream removed, broadcasting: 5 Apr 20 00:54:41.349: INFO: Found all expected endpoints: [netserver-0] Apr 20 00:54:41.353: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.225 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8820 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 00:54:41.353: INFO: >>> kubeConfig: /root/.kube/config I0420 00:54:41.388265 8 log.go:172] (0xc002aee370) (0xc0028e0640) Create stream I0420 00:54:41.388293 8 log.go:172] (0xc002aee370) (0xc0028e0640) Stream added, broadcasting: 1 I0420 00:54:41.396528 8 log.go:172] (0xc002aee370) Reply frame received for 1 I0420 00:54:41.396577 8 log.go:172] (0xc002aee370) (0xc000c0a320) Create stream I0420 00:54:41.396594 8 log.go:172] (0xc002aee370) (0xc000c0a320) Stream added, broadcasting: 3 I0420 00:54:41.399665 8 log.go:172] (0xc002aee370) Reply frame received for 3 I0420 00:54:41.399703 8 log.go:172] (0xc002aee370) (0xc001213cc0) Create stream I0420 00:54:41.399720 8 log.go:172] (0xc002aee370) (0xc001213cc0) 
Stream added, broadcasting: 5 I0420 00:54:41.400472 8 log.go:172] (0xc002aee370) Reply frame received for 5 I0420 00:54:42.472941 8 log.go:172] (0xc002aee370) Data frame received for 3 I0420 00:54:42.472972 8 log.go:172] (0xc000c0a320) (3) Data frame handling I0420 00:54:42.472995 8 log.go:172] (0xc000c0a320) (3) Data frame sent I0420 00:54:42.473006 8 log.go:172] (0xc002aee370) Data frame received for 3 I0420 00:54:42.473017 8 log.go:172] (0xc000c0a320) (3) Data frame handling I0420 00:54:42.473367 8 log.go:172] (0xc002aee370) Data frame received for 5 I0420 00:54:42.473422 8 log.go:172] (0xc001213cc0) (5) Data frame handling I0420 00:54:42.475197 8 log.go:172] (0xc002aee370) Data frame received for 1 I0420 00:54:42.475215 8 log.go:172] (0xc0028e0640) (1) Data frame handling I0420 00:54:42.475227 8 log.go:172] (0xc0028e0640) (1) Data frame sent I0420 00:54:42.475237 8 log.go:172] (0xc002aee370) (0xc0028e0640) Stream removed, broadcasting: 1 I0420 00:54:42.475386 8 log.go:172] (0xc002aee370) (0xc0028e0640) Stream removed, broadcasting: 1 I0420 00:54:42.475435 8 log.go:172] (0xc002aee370) (0xc000c0a320) Stream removed, broadcasting: 3 I0420 00:54:42.475463 8 log.go:172] (0xc002aee370) (0xc001213cc0) Stream removed, broadcasting: 5 Apr 20 00:54:42.475: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 I0420 00:54:42.475526 8 log.go:172] (0xc002aee370) Go away received Apr 20 00:54:42.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8820" for this suite. 
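The UDP reachability check driving the log above is a one-liner run inside the host-network test pod via `kubectl exec`. A sketch of the same probe (pod IP and port taken from the log; only the blank-line filter is demonstrated locally):

```shell
# On the cluster the framework effectively runs:
#   echo hostName | nc -w 1 -u 10.244.2.66 8081 | grep -v '^\s*$'
# The netserver pod echoes its hostname back over UDP; grep -v '^\s*$'
# drops blank lines so only a non-empty reply counts as success.
# Local demonstration of the filter with a stand-in reply:
printf 'netserver-0\n\n' | grep -v '^\s*$'
```

An empty UDP response would leave nothing after the filter, which is how the test distinguishes a reachable endpoint from a silent one.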
• [SLOW TEST:28.446 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4669,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:54:42.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Apr 20 00:54:42.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions'
Apr 20 00:54:42.713: INFO: stderr: ""
Apr 20 00:54:42.713: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:54:42.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4443" for this suite.
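The api-versions check is easy to reproduce: `kubectl api-versions` prints one group/version per line and the test asserts that the core `v1` appears. A sketch using a trimmed stand-in for the output captured above (on a live cluster, substitute the real `kubectl api-versions` call):

```shell
# Trimmed stand-in for `kubectl api-versions` output (full list in the log).
versions='admissionregistration.k8s.io/v1
apps/v1
batch/v1
v1'
# -x forces a whole-line match, so "apps/v1" alone would not satisfy the check.
if printf '%s\n' "$versions" | grep -qx 'v1'; then
  echo 'v1 present'
fi
```

Without `-x`, every `*/v1` group would match and the check would be meaningless.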
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":272,"skipped":4688,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:54:42.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 20 00:54:42.818: INFO: Waiting up to 5m0s for pod "pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff" in namespace "emptydir-1501" to be "Succeeded or Failed"
Apr 20 00:54:42.822: INFO: Pod "pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.781961ms
Apr 20 00:54:44.825: INFO: Pod "pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007185818s
Apr 20 00:54:46.829: INFO: Pod "pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011150477s
STEP: Saw pod success
Apr 20 00:54:46.829: INFO: Pod "pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff" satisfied condition "Succeeded or Failed"
Apr 20 00:54:46.833: INFO: Trying to get logs from node latest-worker pod pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff container test-container:
STEP: delete the pod
Apr 20 00:54:46.853: INFO: Waiting for pod pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff to disappear
Apr 20 00:54:46.858: INFO: Pod pod-7ae79fdc-20ef-47d1-931e-3e2a84f55aff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 20 00:54:46.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1501" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4688,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 20 00:54:46.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6792 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6792 STEP: creating replication controller externalsvc in namespace services-6792 I0420 00:54:47.043848 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6792, replica count: 2 I0420 00:54:50.094277 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 00:54:53.094547 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 20 00:54:53.131: INFO: Creating new exec pod Apr 20 00:54:57.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6792 execpodp9jbw -- /bin/sh -x -c nslookup clusterip-service' Apr 20 00:54:57.494: INFO: stderr: "I0420 00:54:57.396681 3337 log.go:172] (0xc00003bef0) (0xc0007ff220) Create stream\nI0420 00:54:57.396750 3337 log.go:172] (0xc00003bef0) (0xc0007ff220) Stream added, broadcasting: 1\nI0420 00:54:57.399629 3337 log.go:172] (0xc00003bef0) Reply frame received for 1\nI0420 00:54:57.399675 3337 log.go:172] (0xc00003bef0) (0xc0009d8000) Create stream\nI0420 00:54:57.399694 3337 log.go:172] (0xc00003bef0) (0xc0009d8000) Stream added, broadcasting: 3\nI0420 00:54:57.400601 3337 log.go:172] (0xc00003bef0) Reply frame received for 3\nI0420 00:54:57.400628 3337 log.go:172] (0xc00003bef0) (0xc0007ff400) Create stream\nI0420 00:54:57.400637 3337 log.go:172] (0xc00003bef0) (0xc0007ff400) Stream added, broadcasting: 
5\nI0420 00:54:57.401670 3337 log.go:172] (0xc00003bef0) Reply frame received for 5\nI0420 00:54:57.481725 3337 log.go:172] (0xc00003bef0) Data frame received for 5\nI0420 00:54:57.481769 3337 log.go:172] (0xc0007ff400) (5) Data frame handling\nI0420 00:54:57.481812 3337 log.go:172] (0xc0007ff400) (5) Data frame sent\n+ nslookup clusterip-service\nI0420 00:54:57.487484 3337 log.go:172] (0xc00003bef0) Data frame received for 3\nI0420 00:54:57.487500 3337 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0420 00:54:57.487511 3337 log.go:172] (0xc0009d8000) (3) Data frame sent\nI0420 00:54:57.488546 3337 log.go:172] (0xc00003bef0) Data frame received for 3\nI0420 00:54:57.488564 3337 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0420 00:54:57.488580 3337 log.go:172] (0xc0009d8000) (3) Data frame sent\nI0420 00:54:57.489069 3337 log.go:172] (0xc00003bef0) Data frame received for 3\nI0420 00:54:57.489089 3337 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0420 00:54:57.489291 3337 log.go:172] (0xc00003bef0) Data frame received for 5\nI0420 00:54:57.489318 3337 log.go:172] (0xc0007ff400) (5) Data frame handling\nI0420 00:54:57.490973 3337 log.go:172] (0xc00003bef0) Data frame received for 1\nI0420 00:54:57.490986 3337 log.go:172] (0xc0007ff220) (1) Data frame handling\nI0420 00:54:57.490998 3337 log.go:172] (0xc0007ff220) (1) Data frame sent\nI0420 00:54:57.491097 3337 log.go:172] (0xc00003bef0) (0xc0007ff220) Stream removed, broadcasting: 1\nI0420 00:54:57.491117 3337 log.go:172] (0xc00003bef0) Go away received\nI0420 00:54:57.491399 3337 log.go:172] (0xc00003bef0) (0xc0007ff220) Stream removed, broadcasting: 1\nI0420 00:54:57.491414 3337 log.go:172] (0xc00003bef0) (0xc0009d8000) Stream removed, broadcasting: 3\nI0420 00:54:57.491422 3337 log.go:172] (0xc00003bef0) (0xc0007ff400) Stream removed, broadcasting: 5\n" Apr 20 00:54:57.494: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6792.svc.cluster.local\tcanonical name = externalsvc.services-6792.svc.cluster.local.\nName:\texternalsvc.services-6792.svc.cluster.local\nAddress: 10.96.181.219\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6792, will wait for the garbage collector to delete the pods Apr 20 00:54:57.553: INFO: Deleting ReplicationController externalsvc took: 6.310975ms Apr 20 00:54:57.854: INFO: Terminating ReplicationController externalsvc pods took: 300.292556ms Apr 20 00:55:13.080: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 20 00:55:13.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6792" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:26.242 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":274,"skipped":4688,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSApr 20 00:55:13.129: INFO: Running AfterSuite actions on all nodes Apr 20 00:55:13.130: INFO: Running AfterSuite actions on node 1 Apr 20 00:55:13.130: INFO: Skipping dumping logs from cluster JUnit report was created: 
/home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":274,"skipped":4717,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782

Ran 275 of 4992 Specs in 4768.776 seconds
FAIL! -- 274 Passed | 1 Failed | 0 Pending | 4717 Skipped
--- FAIL: TestE2E (4768.86s)
FAIL
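Per the summary, 274 of the 275 executed specs passed; the only failure is the StatefulSet eviction spec. A quick pass-rate computation, plus the usual next step of re-running only that spec (`-ginkgo.focus` is a standard Ginkgo flag; the `e2e.test` binary path is hypothetical):

```shell
# Re-run just the failed spec against the cluster, e.g.:
#   ./e2e.test -kubeconfig=/root/.kube/config \
#     -ginkgo.focus='Should recreate evicted statefulset'
# Pass rate from the summary line: 274 passed of 275 run.
awk 'BEGIN { printf "%.1f%% passed\n", 100 * 274 / 275 }'
```

Focusing a rerun on the single failed spec is much faster than repeating the full 275-spec conformance pass when triaging a flake.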