I0509 10:46:46.355499 6 e2e.go:224] Starting e2e run "60ac9bb2-91e2-11ea-a20c-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589021205 - Will randomize all specs
Will run 201 of 2164 specs

May 9 10:46:46.543: INFO: >>> kubeConfig: /root/.kube/config
May 9 10:46:46.548: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 9 10:46:46.570: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 9 10:46:46.603: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 9 10:46:46.603: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 9 10:46:46.603: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 9 10:46:46.610: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 9 10:46:46.610: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 9 10:46:46.610: INFO: e2e test version: v1.13.12
May 9 10:46:46.611: INFO: kube-apiserver version: v1.13.12
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 9 10:46:46.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
May 9 10:46:46.733: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 9 10:46:46.742: INFO: Waiting up to 5m0s for pod "pod-612f6e39-91e2-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-pvm64" to be "success or failure"
May 9 10:46:46.769: INFO: Pod "pod-612f6e39-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.795201ms
May 9 10:46:48.774: INFO: Pod "pod-612f6e39-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032588649s
May 9 10:46:50.779: INFO: Pod "pod-612f6e39-91e2-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036937237s
STEP: Saw pod success
May 9 10:46:50.779: INFO: Pod "pod-612f6e39-91e2-11ea-a20c-0242ac110018" satisfied condition "success or failure"
May 9 10:46:50.782: INFO: Trying to get logs from node hunter-worker pod pod-612f6e39-91e2-11ea-a20c-0242ac110018 container test-container:
STEP: delete the pod
May 9 10:46:50.848: INFO: Waiting for pod pod-612f6e39-91e2-11ea-a20c-0242ac110018 to disappear
May 9 10:46:50.859: INFO: Pod pod-612f6e39-91e2-11ea-a20c-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 9 10:46:50.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pvm64" for this suite.
May 9 10:46:56.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 9 10:46:56.909: INFO: namespace: e2e-tests-emptydir-pvm64, resource: bindings, ignored listing per whitelist
May 9 10:46:56.959: INFO: namespace e2e-tests-emptydir-pvm64 deletion completed in 6.096846107s
• [SLOW TEST:10.348 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 9 10:46:56.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-dt6dn
I0509 10:46:57.060356 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-dt6dn, replica count: 1
I0509 10:46:58.110890 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0509 10:46:59.111113 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0509 10:47:00.111362 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0509 10:47:01.111598 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 9 10:47:01.367: INFO: Created: latency-svc-kjxgq
May 9 10:47:01.377: INFO: Got endpoints: latency-svc-kjxgq [165.527447ms]
May 9 10:47:01.432: INFO: Created: latency-svc-cmtr6
May 9 10:47:01.466: INFO: Got endpoints: latency-svc-cmtr6 [89.142136ms]
May 9 10:47:01.530: INFO: Created: latency-svc-k4c6z
May 9 10:47:01.533: INFO: Got endpoints: latency-svc-k4c6z [155.003421ms]
May 9 10:47:01.569: INFO: Created: latency-svc-f7qqm
May 9 10:47:01.581: INFO: Got endpoints: latency-svc-f7qqm [203.169722ms]
May 9 10:47:01.611: INFO: Created: latency-svc-4dpgr
May 9 10:47:01.623: INFO: Got endpoints: latency-svc-4dpgr [245.522894ms]
May 9 10:47:01.673: INFO: Created: latency-svc-nppkf
May 9 10:47:01.678: INFO: Got endpoints: latency-svc-nppkf [299.329416ms]
May 9 10:47:01.713: INFO: Created: latency-svc-7jfhp
May 9 10:47:01.748: INFO: Got endpoints: latency-svc-7jfhp [369.324919ms]
May 9 10:47:01.832: INFO: Created: latency-svc-ft8wv
May 9 10:47:01.863: INFO: Got endpoints: latency-svc-ft8wv [484.554473ms]
May 9 10:47:01.955: INFO: Created: latency-svc-t7xdb
May 9 10:47:01.958: INFO: Got endpoints: latency-svc-t7xdb [579.522747ms]
May 9 10:47:01.989: INFO: Created:
latency-svc-5v25j May 9 10:47:02.006: INFO: Got endpoints: latency-svc-5v25j [627.771834ms] May 9 10:47:02.030: INFO: Created: latency-svc-gn6vt May 9 10:47:02.042: INFO: Got endpoints: latency-svc-gn6vt [663.458666ms] May 9 10:47:02.092: INFO: Created: latency-svc-jrsxr May 9 10:47:02.096: INFO: Got endpoints: latency-svc-jrsxr [716.228955ms] May 9 10:47:02.120: INFO: Created: latency-svc-bbcvt May 9 10:47:02.133: INFO: Got endpoints: latency-svc-bbcvt [753.673455ms] May 9 10:47:02.156: INFO: Created: latency-svc-jtnlw May 9 10:47:02.169: INFO: Got endpoints: latency-svc-jtnlw [789.65509ms] May 9 10:47:02.243: INFO: Created: latency-svc-tt4tw May 9 10:47:02.248: INFO: Got endpoints: latency-svc-tt4tw [867.844623ms] May 9 10:47:02.283: INFO: Created: latency-svc-s6rrk May 9 10:47:02.318: INFO: Got endpoints: latency-svc-s6rrk [938.107887ms] May 9 10:47:02.380: INFO: Created: latency-svc-vhcbj May 9 10:47:02.391: INFO: Got endpoints: latency-svc-vhcbj [924.788534ms] May 9 10:47:02.439: INFO: Created: latency-svc-wnt2l May 9 10:47:02.449: INFO: Got endpoints: latency-svc-wnt2l [915.731781ms] May 9 10:47:02.517: INFO: Created: latency-svc-fpwpx May 9 10:47:02.606: INFO: Got endpoints: latency-svc-fpwpx [1.02509745s] May 9 10:47:02.734: INFO: Created: latency-svc-qszbs May 9 10:47:02.767: INFO: Got endpoints: latency-svc-qszbs [1.143518713s] May 9 10:47:02.865: INFO: Created: latency-svc-ljx7m May 9 10:47:02.869: INFO: Got endpoints: latency-svc-ljx7m [1.191508179s] May 9 10:47:02.900: INFO: Created: latency-svc-gb9mb May 9 10:47:02.904: INFO: Got endpoints: latency-svc-gb9mb [1.156593453s] May 9 10:47:02.930: INFO: Created: latency-svc-k5gcc May 9 10:47:02.935: INFO: Got endpoints: latency-svc-k5gcc [1.071802325s] May 9 10:47:02.954: INFO: Created: latency-svc-jzm54 May 9 10:47:03.032: INFO: Got endpoints: latency-svc-jzm54 [1.073661199s] May 9 10:47:03.051: INFO: Created: latency-svc-kdnh2 May 9 10:47:03.092: INFO: Got endpoints: latency-svc-kdnh2 [1.08539624s] May 9 10:47:03.171: INFO: Created: latency-svc-lf82w May 9 10:47:03.200: INFO: Got endpoints: latency-svc-lf82w [1.157383852s] May 9 10:47:03.242: INFO: Created: latency-svc-9zd86 May 9 10:47:03.255: INFO: Got endpoints: latency-svc-9zd86 [1.158931911s] May 9 10:47:03.332: INFO: Created: latency-svc-vtg9z May 9 10:47:03.335: INFO: Got endpoints: latency-svc-vtg9z [1.202201791s] May 9 10:47:03.368: INFO: Created: latency-svc-z95sj May 9 10:47:03.384: INFO: Got endpoints: latency-svc-z95sj [1.214423032s] May 9 10:47:03.404: INFO: Created: latency-svc-h9kk6 May 9 10:47:03.420: INFO: Got endpoints: latency-svc-h9kk6 [1.172588505s] May 9 10:47:03.469: INFO: Created: latency-svc-whrqb May 9 10:47:03.474: INFO: Got endpoints: latency-svc-whrqb [1.15602477s] May 9 10:47:03.506: INFO: Created: latency-svc-qbx9g May 9 10:47:03.523: INFO: Got endpoints: latency-svc-qbx9g [1.131283454s] May 9 10:47:03.548: INFO: Created: latency-svc-zgvlh May 9 10:47:03.559: INFO: Got endpoints: latency-svc-zgvlh [1.110509959s] May 9 10:47:03.607: INFO: Created: latency-svc-47qdr May 9 10:47:03.626: INFO: Got endpoints: latency-svc-47qdr [1.019886395s] May 9 10:47:03.656: INFO: Created: latency-svc-9kszw May 9 10:47:03.686: INFO: Got endpoints: latency-svc-9kszw [918.91126ms] May 9 10:47:03.775: INFO: Created: latency-svc-vwq92 May 9 10:47:03.781: INFO: Got endpoints: latency-svc-vwq92 [911.858456ms] May 9 10:47:03.830: INFO: Created: latency-svc-k4mp4 May 9 10:47:03.859: INFO: Got endpoints: latency-svc-k4mp4 [954.801196ms] May 9 10:47:03.921: INFO: Created: 
latency-svc-nq42z May 9 10:47:03.943: INFO: Got endpoints: latency-svc-nq42z [1.00866089s] May 9 10:47:04.010: INFO: Created: latency-svc-grl8p May 9 10:47:04.062: INFO: Got endpoints: latency-svc-grl8p [1.030007122s] May 9 10:47:04.094: INFO: Created: latency-svc-vj47h May 9 10:47:04.112: INFO: Got endpoints: latency-svc-vj47h [1.019830296s] May 9 10:47:04.344: INFO: Created: latency-svc-h4vlz May 9 10:47:04.349: INFO: Got endpoints: latency-svc-h4vlz [1.148650906s] May 9 10:47:04.614: INFO: Created: latency-svc-6x5cr May 9 10:47:04.628: INFO: Got endpoints: latency-svc-6x5cr [1.373379341s] May 9 10:47:04.660: INFO: Created: latency-svc-dkks9 May 9 10:47:04.700: INFO: Got endpoints: latency-svc-dkks9 [1.364443303s] May 9 10:47:04.764: INFO: Created: latency-svc-k46lp May 9 10:47:04.779: INFO: Got endpoints: latency-svc-k46lp [1.394663437s] May 9 10:47:04.973: INFO: Created: latency-svc-9z5p5 May 9 10:47:05.031: INFO: Got endpoints: latency-svc-9z5p5 [1.61018605s] May 9 10:47:05.072: INFO: Created: latency-svc-5mktz May 9 10:47:05.104: INFO: Got endpoints: latency-svc-5mktz [1.629699878s] May 9 10:47:05.138: INFO: Created: latency-svc-zgt5j May 9 10:47:05.156: INFO: Got endpoints: latency-svc-zgt5j [1.633562998s] May 9 10:47:05.193: INFO: Created: latency-svc-btffn May 9 10:47:05.254: INFO: Got endpoints: latency-svc-btffn [1.694382236s] May 9 10:47:05.282: INFO: Created: latency-svc-468ck May 9 10:47:05.348: INFO: Got endpoints: latency-svc-468ck [1.7214533s] May 9 10:47:05.433: INFO: Created: latency-svc-r9rrq May 9 10:47:05.463: INFO: Got endpoints: latency-svc-r9rrq [1.777371681s] May 9 10:47:05.522: INFO: Created: latency-svc-wrl6d May 9 10:47:05.571: INFO: Got endpoints: latency-svc-wrl6d [1.789726423s] May 9 10:47:05.582: INFO: Created: latency-svc-nmsn7 May 9 10:47:05.644: INFO: Got endpoints: latency-svc-nmsn7 [1.784397212s] May 9 10:47:05.721: INFO: Created: latency-svc-brffq May 9 10:47:05.725: INFO: Got endpoints: latency-svc-brffq [1.781709573s] May 9 10:47:05.757: INFO: Created: latency-svc-ld7jr May 9 10:47:05.769: INFO: Got endpoints: latency-svc-ld7jr [1.707306103s] May 9 10:47:05.805: INFO: Created: latency-svc-2dgd7 May 9 10:47:05.818: INFO: Got endpoints: latency-svc-2dgd7 [1.705856931s] May 9 10:47:05.871: INFO: Created: latency-svc-br92h May 9 10:47:05.878: INFO: Got endpoints: latency-svc-br92h [1.528931149s] May 9 10:47:05.931: INFO: Created: latency-svc-64cbk May 9 10:47:05.963: INFO: Got endpoints: latency-svc-64cbk [1.335264254s] May 9 10:47:06.032: INFO: Created: latency-svc-8vttj May 9 10:47:06.036: INFO: Got endpoints: latency-svc-8vttj [158.349495ms] May 9 10:47:06.075: INFO: Created: latency-svc-jfngb May 9 10:47:06.734: INFO: Got endpoints: latency-svc-jfngb [2.034180342s] May 9 10:47:06.755: INFO: Created: latency-svc-gk9l6 May 9 10:47:06.771: INFO: Got endpoints: latency-svc-gk9l6 [1.99292003s] May 9 10:47:06.830: INFO: Created: latency-svc-hpscp May 9 10:47:06.936: INFO: Got endpoints: latency-svc-hpscp [1.905569213s] May 9 10:47:06.968: INFO: Created: latency-svc-j4z4d May 9 10:47:06.998: INFO: Got endpoints: latency-svc-j4z4d [1.893740801s] May 9 10:47:07.028: INFO: Created: latency-svc-lfr54 May 9 10:47:07.116: INFO: Got endpoints: latency-svc-lfr54 [1.95967265s] May 9 10:47:07.163: INFO: Created: latency-svc-4q9cl May 9 10:47:07.178: INFO: Got endpoints: latency-svc-4q9cl [1.924352058s] May 9 10:47:07.302: INFO: Created: latency-svc-s8mxz May 9 10:47:07.310: INFO: Got endpoints: latency-svc-s8mxz [1.962058035s] May 9 10:47:07.782: INFO: Created: 
latency-svc-xfbsc May 9 10:47:07.785: INFO: Got endpoints: latency-svc-xfbsc [2.321888561s] May 9 10:47:08.213: INFO: Created: latency-svc-bq8c4 May 9 10:47:08.218: INFO: Got endpoints: latency-svc-bq8c4 [2.646729917s] May 9 10:47:08.312: INFO: Created: latency-svc-tvd7k May 9 10:47:08.427: INFO: Got endpoints: latency-svc-tvd7k [2.783572188s] May 9 10:47:08.430: INFO: Created: latency-svc-wzvcv May 9 10:47:08.468: INFO: Got endpoints: latency-svc-wzvcv [2.742302228s] May 9 10:47:08.492: INFO: Created: latency-svc-p2ldf May 9 10:47:08.498: INFO: Got endpoints: latency-svc-p2ldf [2.728308573s] May 9 10:47:08.585: INFO: Created: latency-svc-d454r May 9 10:47:08.589: INFO: Got endpoints: latency-svc-d454r [2.770817473s] May 9 10:47:08.617: INFO: Created: latency-svc-j6txb May 9 10:47:08.624: INFO: Got endpoints: latency-svc-j6txb [2.66042914s] May 9 10:47:08.647: INFO: Created: latency-svc-xbdmv May 9 10:47:08.650: INFO: Got endpoints: latency-svc-xbdmv [2.613424151s] May 9 10:47:08.732: INFO: Created: latency-svc-wg62g May 9 10:47:08.751: INFO: Got endpoints: latency-svc-wg62g [2.016861318s] May 9 10:47:08.955: INFO: Created: latency-svc-pt6t6 May 9 10:47:09.452: INFO: Got endpoints: latency-svc-pt6t6 [2.680089329s] May 9 10:47:09.650: INFO: Created: latency-svc-9fwkd May 9 10:47:09.686: INFO: Got endpoints: latency-svc-9fwkd [2.74952526s] May 9 10:47:09.841: INFO: Created: latency-svc-8lpdz May 9 10:47:09.866: INFO: Got endpoints: latency-svc-8lpdz [2.868318525s] May 9 10:47:10.057: INFO: Created: latency-svc-m87zp May 9 10:47:10.100: INFO: Got endpoints: latency-svc-m87zp [2.983666362s] May 9 10:47:10.263: INFO: Created: latency-svc-q5nht May 9 10:47:10.264: INFO: Got endpoints: latency-svc-q5nht [3.085936166s] May 9 10:47:10.334: INFO: Created: latency-svc-tg79n May 9 10:47:10.346: INFO: Got endpoints: latency-svc-tg79n [3.035866729s] May 9 10:47:10.412: INFO: Created: latency-svc-s5btk May 9 10:47:10.424: INFO: Got endpoints: latency-svc-s5btk [2.638480707s] May 9 10:47:10.485: INFO: Created: latency-svc-dd5jz May 9 10:47:10.497: INFO: Got endpoints: latency-svc-dd5jz [2.279194405s] May 9 10:47:10.565: INFO: Created: latency-svc-hqn7m May 9 10:47:10.574: INFO: Got endpoints: latency-svc-hqn7m [2.146837594s] May 9 10:47:10.634: INFO: Created: latency-svc-xkf7z May 9 10:47:10.647: INFO: Got endpoints: latency-svc-xkf7z [2.179231042s] May 9 10:47:10.715: INFO: Created: latency-svc-tncq4 May 9 10:47:10.719: INFO: Got endpoints: latency-svc-tncq4 [2.220797686s] May 9 10:47:10.742: INFO: Created: latency-svc-jswq7 May 9 10:47:10.771: INFO: Got endpoints: latency-svc-jswq7 [2.182445793s] May 9 10:47:10.804: INFO: Created: latency-svc-csr82 May 9 10:47:10.865: INFO: Got endpoints: latency-svc-csr82 [2.240687889s] May 9 10:47:10.879: INFO: Created: latency-svc-5x6jz May 9 10:47:10.928: INFO: Got endpoints: latency-svc-5x6jz [2.277888408s] May 9 10:47:11.020: INFO: Created: latency-svc-4pkmp May 9 10:47:11.026: INFO: Got endpoints: latency-svc-4pkmp [2.275280525s] May 9 10:47:11.065: INFO: Created: latency-svc-twpb7 May 9 10:47:11.092: INFO: Got endpoints: latency-svc-twpb7 [1.640791346s] May 9 10:47:11.176: INFO: Created: latency-svc-g4lsw May 9 10:47:11.219: INFO: Got endpoints: latency-svc-g4lsw [1.532697114s] May 9 10:47:11.245: INFO: Created: latency-svc-7qls9 May 9 10:47:11.314: INFO: Got endpoints: latency-svc-7qls9 [1.448301146s] May 9 10:47:11.359: INFO: Created: latency-svc-mrtvt May 9 10:47:11.375: INFO: Got endpoints: latency-svc-mrtvt [1.274839545s] May 9 10:47:11.395: INFO: Created: 
latency-svc-q8jqf May 9 10:47:11.521: INFO: Created: latency-svc-dzqqc May 9 10:47:11.540: INFO: Got endpoints: latency-svc-q8jqf [1.275935985s] May 9 10:47:11.549: INFO: Got endpoints: latency-svc-dzqqc [1.20333892s] May 9 10:47:11.575: INFO: Created: latency-svc-zls45 May 9 10:47:11.661: INFO: Got endpoints: latency-svc-zls45 [1.237107282s] May 9 10:47:11.725: INFO: Created: latency-svc-bc722 May 9 10:47:11.749: INFO: Got endpoints: latency-svc-bc722 [1.252256844s] May 9 10:47:11.855: INFO: Created: latency-svc-trlqw May 9 10:47:11.893: INFO: Got endpoints: latency-svc-trlqw [1.318529102s] May 9 10:47:11.941: INFO: Created: latency-svc-85f7m May 9 10:47:12.032: INFO: Got endpoints: latency-svc-85f7m [1.385075086s] May 9 10:47:12.035: INFO: Created: latency-svc-rkqnh May 9 10:47:12.042: INFO: Got endpoints: latency-svc-rkqnh [1.323136403s] May 9 10:47:12.079: INFO: Created: latency-svc-nstcn May 9 10:47:12.102: INFO: Got endpoints: latency-svc-nstcn [1.331047915s] May 9 10:47:12.121: INFO: Created: latency-svc-9gwt6 May 9 10:47:12.170: INFO: Got endpoints: latency-svc-9gwt6 [1.304903196s] May 9 10:47:12.179: INFO: Created: latency-svc-snbp6 May 9 10:47:12.187: INFO: Got endpoints: latency-svc-snbp6 [1.258881709s] May 9 10:47:12.211: INFO: Created: latency-svc-5scvw May 9 10:47:12.241: INFO: Got endpoints: latency-svc-5scvw [1.215143862s] May 9 10:47:12.326: INFO: Created: latency-svc-qvd7n May 9 10:47:12.331: INFO: Got endpoints: latency-svc-qvd7n [1.23872514s] May 9 10:47:12.367: INFO: Created: latency-svc-x7bwx May 9 10:47:12.380: INFO: Got endpoints: latency-svc-x7bwx [1.160976337s] May 9 10:47:12.414: INFO: Created: latency-svc-f7jsg May 9 10:47:12.481: INFO: Got endpoints: latency-svc-f7jsg [1.16654087s] May 9 10:47:12.528: INFO: Created: latency-svc-89k9c May 9 10:47:12.558: INFO: Got endpoints: latency-svc-89k9c [1.183631269s] May 9 10:47:12.626: INFO: Created: latency-svc-kgt5m May 9 10:47:12.660: INFO: Got endpoints: latency-svc-kgt5m [1.120380456s] May 9 10:47:12.692: INFO: Created: latency-svc-rxjhq May 9 10:47:12.704: INFO: Got endpoints: latency-svc-rxjhq [1.155121906s] May 9 10:47:12.785: INFO: Created: latency-svc-tdqvn May 9 10:47:12.785: INFO: Got endpoints: latency-svc-tdqvn [1.124224302s] May 9 10:47:12.811: INFO: Created: latency-svc-zbxb6 May 9 10:47:12.825: INFO: Got endpoints: latency-svc-zbxb6 [1.075904912s] May 9 10:47:12.854: INFO: Created: latency-svc-pvljw May 9 10:47:12.867: INFO: Got endpoints: latency-svc-pvljw [974.18895ms] May 9 10:47:12.918: INFO: Created: latency-svc-6wk24 May 9 10:47:12.935: INFO: Got endpoints: latency-svc-6wk24 [902.48817ms] May 9 10:47:12.967: INFO: Created: latency-svc-qrjkd May 9 10:47:12.976: INFO: Got endpoints: latency-svc-qrjkd [934.078235ms] May 9 10:47:13.014: INFO: Created: latency-svc-r9nbq May 9 10:47:13.074: INFO: Got endpoints: latency-svc-r9nbq [971.433471ms] May 9 10:47:13.087: INFO: Created: latency-svc-2q9m8 May 9 10:47:13.102: INFO: Got endpoints: latency-svc-2q9m8 [932.662425ms] May 9 10:47:13.138: INFO: Created: latency-svc-9z7vd May 9 10:47:13.151: INFO: Got endpoints: latency-svc-9z7vd [964.001125ms] May 9 10:47:13.170: INFO: Created: latency-svc-zft8x May 9 10:47:13.212: INFO: Got endpoints: latency-svc-zft8x [970.379764ms] May 9 10:47:13.225: INFO: Created: latency-svc-2tbm6 May 9 10:47:13.248: INFO: Got endpoints: latency-svc-2tbm6 [916.942543ms] May 9 10:47:13.279: INFO: Created: latency-svc-fr6vz May 9 10:47:13.296: INFO: Got endpoints: latency-svc-fr6vz [916.351493ms] May 9 10:47:13.374: INFO: Created: 
latency-svc-mmp9c May 9 10:47:13.380: INFO: Got endpoints: latency-svc-mmp9c [898.65294ms] May 9 10:47:13.404: INFO: Created: latency-svc-s627h May 9 10:47:13.423: INFO: Got endpoints: latency-svc-s627h [864.567058ms] May 9 10:47:13.440: INFO: Created: latency-svc-2xlr9 May 9 10:47:13.453: INFO: Got endpoints: latency-svc-2xlr9 [792.928077ms] May 9 10:47:13.525: INFO: Created: latency-svc-mtqd2 May 9 10:47:13.530: INFO: Got endpoints: latency-svc-mtqd2 [825.966389ms] May 9 10:47:13.561: INFO: Created: latency-svc-gzr5q May 9 10:47:13.572: INFO: Got endpoints: latency-svc-gzr5q [787.022101ms] May 9 10:47:13.610: INFO: Created: latency-svc-zfw6q May 9 10:47:13.697: INFO: Got endpoints: latency-svc-zfw6q [871.58097ms] May 9 10:47:13.699: INFO: Created: latency-svc-dg6hj May 9 10:47:13.705: INFO: Got endpoints: latency-svc-dg6hj [838.231179ms] May 9 10:47:13.759: INFO: Created: latency-svc-d7x2m May 9 10:47:13.778: INFO: Got endpoints: latency-svc-d7x2m [843.898531ms] May 9 10:47:13.872: INFO: Created: latency-svc-xmljb May 9 10:47:13.874: INFO: Got endpoints: latency-svc-xmljb [897.997129ms] May 9 10:47:13.952: INFO: Created: latency-svc-x5phm May 9 10:47:14.016: INFO: Got endpoints: latency-svc-x5phm [941.69204ms] May 9 10:47:14.053: INFO: Created: latency-svc-tmh5l May 9 10:47:14.078: INFO: Got endpoints: latency-svc-tmh5l [975.969745ms] May 9 10:47:14.101: INFO: Created: latency-svc-7zk8p May 9 10:47:14.114: INFO: Got endpoints: latency-svc-7zk8p [963.745077ms] May 9 10:47:14.172: INFO: Created: latency-svc-xb2rh May 9 10:47:14.175: INFO: Got endpoints: latency-svc-xb2rh [962.860412ms] May 9 10:47:14.197: INFO: Created: latency-svc-768gw May 9 10:47:14.211: INFO: Got endpoints: latency-svc-768gw [962.9697ms] May 9 10:47:14.251: INFO: Created: latency-svc-5g7zl May 9 10:47:14.265: INFO: Got endpoints: latency-svc-5g7zl [969.413122ms] May 9 10:47:14.320: INFO: Created: latency-svc-z28jp May 9 10:47:14.326: INFO: Got endpoints: latency-svc-z28jp [945.883475ms] May 9 10:47:14.365: INFO: Created: latency-svc-7s5dn May 9 10:47:14.395: INFO: Got endpoints: latency-svc-7s5dn [971.939564ms] May 9 10:47:14.482: INFO: Created: latency-svc-mn7rs May 9 10:47:14.485: INFO: Got endpoints: latency-svc-mn7rs [1.031055932s] May 9 10:47:14.521: INFO: Created: latency-svc-chrkg May 9 10:47:14.549: INFO: Got endpoints: latency-svc-chrkg [1.018459857s] May 9 10:47:14.667: INFO: Created: latency-svc-lqlxw May 9 10:47:14.671: INFO: Got endpoints: latency-svc-lqlxw [1.098310191s] May 9 10:47:14.763: INFO: Created: latency-svc-m8lbv May 9 10:47:14.858: INFO: Got endpoints: latency-svc-m8lbv [1.161336523s] May 9 10:47:14.861: INFO: Created: latency-svc-dnns6 May 9 10:47:14.892: INFO: Got endpoints: latency-svc-dnns6 [1.186954159s] May 9 10:47:14.929: INFO: Created: latency-svc-zstmq May 9 10:47:14.957: INFO: Got endpoints: latency-svc-zstmq [1.178681206s] May 9 10:47:15.021: INFO: Created: latency-svc-ttdpz May 9 10:47:15.029: INFO: Got endpoints: latency-svc-ttdpz [1.155163524s] May 9 10:47:15.066: INFO: Created: latency-svc-zxmx8 May 9 10:47:15.206: INFO: Got endpoints: latency-svc-zxmx8 [1.190166604s] May 9 10:47:15.213: INFO: Created: latency-svc-vbrc5 May 9 10:47:15.228: INFO: Got endpoints: latency-svc-vbrc5 [1.149224022s] May 9 10:47:15.253: INFO: Created: latency-svc-dbjz5 May 9 10:47:15.264: INFO: Got endpoints: latency-svc-dbjz5 [1.149990286s] May 9 10:47:15.283: INFO: Created: latency-svc-k79pr May 9 10:47:15.301: INFO: Got endpoints: latency-svc-k79pr [1.125836744s] May 9 10:47:15.346: INFO: Created: 
latency-svc-bjnjm May 9 10:47:15.347: INFO: Got endpoints: latency-svc-bjnjm [1.135317891s] May 9 10:47:15.379: INFO: Created: latency-svc-kv9p4 May 9 10:47:15.421: INFO: Got endpoints: latency-svc-kv9p4 [1.155370201s] May 9 10:47:15.502: INFO: Created: latency-svc-v7jkf May 9 10:47:15.505: INFO: Got endpoints: latency-svc-v7jkf [1.179243166s] May 9 10:47:15.528: INFO: Created: latency-svc-gbnpw May 9 10:47:15.542: INFO: Got endpoints: latency-svc-gbnpw [1.146568339s] May 9 10:47:15.565: INFO: Created: latency-svc-j7tzf May 9 10:47:15.578: INFO: Got endpoints: latency-svc-j7tzf [1.093232317s] May 9 10:47:15.638: INFO: Created: latency-svc-nlhtc May 9 10:47:15.640: INFO: Got endpoints: latency-svc-nlhtc [1.091358558s] May 9 10:47:15.691: INFO: Created: latency-svc-wjw75 May 9 10:47:15.704: INFO: Got endpoints: latency-svc-wjw75 [1.033496015s] May 9 10:47:15.727: INFO: Created: latency-svc-mjvkc May 9 10:47:15.786: INFO: Got endpoints: latency-svc-mjvkc [928.01297ms] May 9 10:47:15.841: INFO: Created: latency-svc-555hw May 9 10:47:16.058: INFO: Got endpoints: latency-svc-555hw [1.165754208s] May 9 10:47:16.060: INFO: Created: latency-svc-x8fpj May 9 10:47:16.088: INFO: Got endpoints: latency-svc-x8fpj [1.130664281s] May 9 10:47:16.260: INFO: Created: latency-svc-pz5g9 May 9 10:47:16.263: INFO: Got endpoints: latency-svc-pz5g9 [1.23348444s] May 9 10:47:16.471: INFO: Created: latency-svc-6zv4q May 9 10:47:16.500: INFO: Got endpoints: latency-svc-6zv4q [1.294339161s] May 9 10:47:16.548: INFO: Created: latency-svc-dkzjd May 9 10:47:16.556: INFO: Got endpoints: latency-svc-dkzjd [1.328045457s] May 9 10:47:16.656: INFO: Created: latency-svc-9gsnk May 9 10:47:16.670: INFO: Got endpoints: latency-svc-9gsnk [1.40536319s] May 9 10:47:16.737: INFO: Created: latency-svc-ns4qd May 9 10:47:16.858: INFO: Got endpoints: latency-svc-ns4qd [1.557613269s] May 9 10:47:16.879: INFO: Created: latency-svc-2794v May 9 10:47:16.915: INFO: Got endpoints: latency-svc-2794v [1.567941986s] May 9 10:47:17.297: INFO: Created: latency-svc-vz7z9 May 9 10:47:17.300: INFO: Got endpoints: latency-svc-vz7z9 [1.87902904s] May 9 10:47:17.626: INFO: Created: latency-svc-rk988 May 9 10:47:17.630: INFO: Got endpoints: latency-svc-rk988 [2.125087722s] May 9 10:47:17.702: INFO: Created: latency-svc-nk5tt May 9 10:47:17.714: INFO: Got endpoints: latency-svc-nk5tt [2.17196177s] May 9 10:47:17.815: INFO: Created: latency-svc-kjghr May 9 10:47:17.837: INFO: Got endpoints: latency-svc-kjghr [2.259015137s] May 9 10:47:17.887: INFO: Created: latency-svc-rtstc May 9 10:47:17.973: INFO: Got endpoints: latency-svc-rtstc [2.332124681s] May 9 10:47:17.975: INFO: Created: latency-svc-wl9ks May 9 10:47:18.013: INFO: Got endpoints: latency-svc-wl9ks [2.309043439s] May 9 10:47:18.061: INFO: Created: latency-svc-gdxbp May 9 10:47:18.110: INFO: Got endpoints: latency-svc-gdxbp [2.323507059s] May 9 10:47:18.115: INFO: Created: latency-svc-rnstf May 9 10:47:18.142: INFO: Got endpoints: latency-svc-rnstf [2.083800351s] May 9 10:47:18.193: INFO: Created: latency-svc-tzr6g May 9 10:47:18.207: INFO: Got endpoints: latency-svc-tzr6g [2.118739159s] May 9 10:47:18.266: INFO: Created: latency-svc-9k68n May 9 10:47:18.349: INFO: Got endpoints: latency-svc-9k68n [2.086350496s] May 9 10:47:18.458: INFO: Created: latency-svc-lsct8 May 9 10:47:18.477: INFO: Got endpoints: latency-svc-lsct8 [1.97670919s] May 9 10:47:18.655: INFO: Created: latency-svc-6lwlz May 9 10:47:18.658: INFO: Got endpoints: latency-svc-6lwlz [2.101750904s] May 9 10:47:18.704: INFO: Created: 
latency-svc-sq9vq May 9 10:47:18.717: INFO: Got endpoints: latency-svc-sq9vq [2.047411209s] May 9 10:47:18.745: INFO: Created: latency-svc-6tgn6 May 9 10:47:18.787: INFO: Got endpoints: latency-svc-6tgn6 [1.928052977s] May 9 10:47:18.814: INFO: Created: latency-svc-6d9rs May 9 10:47:18.879: INFO: Got endpoints: latency-svc-6d9rs [1.964581363s] May 9 10:47:18.942: INFO: Created: latency-svc-szjtq May 9 10:47:18.946: INFO: Got endpoints: latency-svc-szjtq [1.645691431s] May 9 10:47:19.016: INFO: Created: latency-svc-ksrrw May 9 10:47:19.024: INFO: Got endpoints: latency-svc-ksrrw [1.393692655s] May 9 10:47:19.081: INFO: Created: latency-svc-rxtk7 May 9 10:47:19.135: INFO: Created: latency-svc-86lmv May 9 10:47:19.135: INFO: Got endpoints: latency-svc-rxtk7 [1.421486557s] May 9 10:47:19.150: INFO: Got endpoints: latency-svc-86lmv [1.313371193s] May 9 10:47:19.242: INFO: Created: latency-svc-85j5z May 9 10:47:19.255: INFO: Got endpoints: latency-svc-85j5z [1.282394422s] May 9 10:47:19.297: INFO: Created: latency-svc-5l5bk May 9 10:47:19.306: INFO: Got endpoints: latency-svc-5l5bk [1.293068544s] May 9 10:47:19.393: INFO: Created: latency-svc-8lcgf May 9 10:47:19.395: INFO: Got endpoints: latency-svc-8lcgf [1.284573166s] May 9 10:47:19.417: INFO: Created: latency-svc-gbpc7 May 9 10:47:19.433: INFO: Got endpoints: latency-svc-gbpc7 [1.290942164s] May 9 10:47:19.561: INFO: Created: latency-svc-b8n6v May 9 10:47:19.578: INFO: Got endpoints: latency-svc-b8n6v [1.370891387s] May 9 10:47:19.771: INFO: Created: latency-svc-qnvjz May 9 10:47:19.805: INFO: Got endpoints: latency-svc-qnvjz [1.456015685s] May 9 10:47:19.964: INFO: Created: latency-svc-mkjmj May 9 10:47:19.985: INFO: Got endpoints: latency-svc-mkjmj [1.508297508s] May 9 10:47:20.042: INFO: Created: latency-svc-htkqs May 9 10:47:20.134: INFO: Got endpoints: latency-svc-htkqs [1.476242752s] May 9 10:47:20.174: INFO: Created: latency-svc-vgr2r May 9 10:47:20.190: INFO: Got endpoints: latency-svc-vgr2r [1.472815354s] May 9 10:47:20.216: INFO: Created: latency-svc-85rpw May 9 10:47:20.232: INFO: Got endpoints: latency-svc-85rpw [1.445132747s] May 9 10:47:20.286: INFO: Created: latency-svc-jj5zg May 9 10:47:20.318: INFO: Got endpoints: latency-svc-jj5zg [1.438606502s] May 9 10:47:20.319: INFO: Created: latency-svc-8px2h May 9 10:47:20.328: INFO: Got endpoints: latency-svc-8px2h [1.382532425s] May 9 10:47:20.348: INFO: Created: latency-svc-mm8t8 May 9 10:47:20.359: INFO: Got endpoints: latency-svc-mm8t8 [1.334593603s] May 9 10:47:20.378: INFO: Created: latency-svc-4lhzw May 9 10:47:20.452: INFO: Got endpoints: latency-svc-4lhzw [1.316659626s] May 9 10:47:20.468: INFO: Created: latency-svc-rpjcq May 9 10:47:20.491: INFO: Got endpoints: latency-svc-rpjcq [1.34065513s] May 9 10:47:20.516: INFO: Created: latency-svc-m6bwz May 9 10:47:20.602: INFO: Got endpoints: latency-svc-m6bwz [1.347048089s] May 9 10:47:20.612: INFO: Created: latency-svc-26bpk May 9 10:47:20.630: INFO: Got endpoints: latency-svc-26bpk [1.323268986s] May 9 10:47:20.630: INFO: Latencies: [89.142136ms 155.003421ms 158.349495ms 203.169722ms 245.522894ms 299.329416ms 369.324919ms 484.554473ms 579.522747ms 627.771834ms 663.458666ms 716.228955ms 753.673455ms 787.022101ms 789.65509ms 792.928077ms 825.966389ms 838.231179ms 843.898531ms 864.567058ms 867.844623ms 871.58097ms 897.997129ms 898.65294ms 902.48817ms 911.858456ms 915.731781ms 916.351493ms 916.942543ms 918.91126ms 924.788534ms 928.01297ms 932.662425ms 934.078235ms 938.107887ms 941.69204ms 945.883475ms 954.801196ms 962.860412ms 
962.9697ms 963.745077ms 964.001125ms 969.413122ms 970.379764ms 971.433471ms 971.939564ms 974.18895ms 975.969745ms 1.00866089s 1.018459857s 1.019830296s 1.019886395s 1.02509745s 1.030007122s 1.031055932s 1.033496015s 1.071802325s 1.073661199s 1.075904912s 1.08539624s 1.091358558s 1.093232317s 1.098310191s 1.110509959s 1.120380456s 1.124224302s 1.125836744s 1.130664281s 1.131283454s 1.135317891s 1.143518713s 1.146568339s 1.148650906s 1.149224022s 1.149990286s 1.155121906s 1.155163524s 1.155370201s 1.15602477s 1.156593453s 1.157383852s 1.158931911s 1.160976337s 1.161336523s 1.165754208s 1.16654087s 1.172588505s 1.178681206s 1.179243166s 1.183631269s 1.186954159s 1.190166604s 1.191508179s 1.202201791s 1.20333892s 1.214423032s 1.215143862s 1.23348444s 1.237107282s 1.23872514s 1.252256844s 1.258881709s 1.274839545s 1.275935985s 1.282394422s 1.284573166s 1.290942164s 1.293068544s 1.294339161s 1.304903196s 1.313371193s 1.316659626s 1.318529102s 1.323136403s 1.323268986s 1.328045457s 1.331047915s 1.334593603s 1.335264254s 1.34065513s 1.347048089s 1.364443303s 1.370891387s 1.373379341s 1.382532425s 1.385075086s 1.393692655s 1.394663437s 1.40536319s 1.421486557s 1.438606502s 1.445132747s 1.448301146s 1.456015685s 1.472815354s 1.476242752s 1.508297508s 1.528931149s 1.532697114s 1.557613269s 1.567941986s 1.61018605s 1.629699878s 1.633562998s 1.640791346s 1.645691431s 1.694382236s 1.705856931s 1.707306103s 1.7214533s 1.777371681s 1.781709573s 1.784397212s 1.789726423s 1.87902904s 1.893740801s 1.905569213s 1.924352058s 1.928052977s 1.95967265s 1.962058035s 1.964581363s 1.97670919s 1.99292003s 2.016861318s 2.034180342s 2.047411209s 2.083800351s 2.086350496s 2.101750904s 2.118739159s 2.125087722s 2.146837594s 2.17196177s 2.179231042s 2.182445793s 2.220797686s 2.240687889s 2.259015137s 2.275280525s 2.277888408s 2.279194405s 2.309043439s 2.321888561s 2.323507059s 2.332124681s 2.613424151s 2.638480707s 2.646729917s 2.66042914s 2.680089329s 2.728308573s 2.742302228s 2.74952526s 2.770817473s 2.783572188s 2.868318525s 2.983666362s 3.035866729s 3.085936166s] May 9 10:47:20.630: INFO: 50 %ile: 1.252256844s May 9 10:47:20.630: INFO: 90 %ile: 2.277888408s May 9 10:47:20.630: INFO: 99 %ile: 3.035866729s May 9 10:47:20.630: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:47:20.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-dt6dn" for this suite. 
May 9 10:47:54.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 9 10:47:54.701: INFO: namespace: e2e-tests-svc-latency-dt6dn, resource: bindings, ignored listing per whitelist
May 9 10:47:54.741: INFO: namespace e2e-tests-svc-latency-dt6dn deletion completed in 34.105609692s
• [SLOW TEST:57.782 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 9 10:47:54.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
May 9 10:47:54.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-trgf5'
May 9 10:47:58.475: INFO: stderr: ""
May 9 10:47:58.475: INFO: stdout: "pod/pause created\n"
May 9 10:47:58.475: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 9 10:47:58.475: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-trgf5" to be "running and ready"
May 9 10:47:58.488: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.22603ms
May 9 10:48:00.493: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018113258s
May 9 10:48:02.498: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.022858827s
May 9 10:48:02.498: INFO: Pod "pause" satisfied condition "running and ready"
May 9 10:48:02.498: INFO: Wanted all 1 pods to be running and ready. Result: true.
Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
May 9 10:48:02.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-trgf5'
May 9 10:48:02.616: INFO: stderr: ""
May 9 10:48:02.616: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 9 10:48:02.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-trgf5'
May 9 10:48:02.706: INFO: stderr: ""
May 9 10:48:02.706: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 9 10:48:02.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-trgf5'
May 9 10:48:02.803: INFO: stderr: ""
May 9 10:48:02.803: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 9 10:48:02.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-trgf5'
May 9 10:48:02.892: INFO: stderr: ""
May 9 10:48:02.892: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
May 9 10:48:02.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-trgf5'
May 9 10:48:03.085: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 9 10:48:03.085: INFO: stdout: "pod \"pause\" force deleted\n"
May 9 10:48:03.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-trgf5'
May 9 10:48:03.190: INFO: stderr: "No resources found.\n"
May 9 10:48:03.190: INFO: stdout: ""
May 9 10:48:03.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-trgf5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 9 10:48:03.279: INFO: stderr: ""
May 9 10:48:03.279: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 9 10:48:03.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-trgf5" for this suite.
May 9 10:48:09.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:48:09.379: INFO: namespace: e2e-tests-kubectl-trgf5, resource: bindings, ignored listing per whitelist May 9 10:48:09.391: INFO: namespace e2e-tests-kubectl-trgf5 deletion completed in 6.108930514s • [SLOW TEST:14.649 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:48:09.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 9 10:48:13.514: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-92826cd4-91e2-11ea-a20c-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-d66jp", SelfLink:"/api/v1/namespaces/e2e-tests-pods-d66jp/pods/pod-submit-remove-92826cd4-91e2-11ea-a20c-0242ac110018", UID:"928389d6-91e2-11ea-99e8-0242ac110002", ResourceVersion:"9572154", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724618089, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"487907795"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nllzr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c76b00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nllzr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c51528), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c4d020), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c51570)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c51590)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001c51598), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001c5159c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724618089, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724618093, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724618093, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724618089, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.6", StartTime:(*v1.Time)(0xc000ba2140), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000ba2160), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://a8eb5438b4126321972dbf245d8b893ba3090e8d1b341abffd60cc8fb1c88fe5"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 9 10:48:18.527: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:48:18.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-d66jp" for this suite. May 9 10:48:24.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:48:24.631: INFO: namespace: e2e-tests-pods-d66jp, resource: bindings, ignored listing per whitelist May 9 10:48:24.653: INFO: namespace e2e-tests-pods-d66jp deletion completed in 6.119577303s • [SLOW TEST:15.262 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:48:24.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 9 10:48:24.754: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 10:48:24.761: INFO: Waiting for terminating namespaces to be deleted... 
May 9 10:48:24.764: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 9 10:48:24.769: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 9 10:48:24.769: INFO: Container kube-proxy ready: true, restart count 0
May 9 10:48:24.769: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 9 10:48:24.769: INFO: Container kindnet-cni ready: true, restart count 0
May 9 10:48:24.769: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 9 10:48:24.769: INFO: Container coredns ready: true, restart count 0
May 9 10:48:24.769: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 9 10:48:24.823: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 9 10:48:24.823: INFO: Container kindnet-cni ready: true, restart count 0
May 9 10:48:24.823: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 9 10:48:24.823: INFO: Container coredns ready: true, restart count 0
May 9 10:48:24.823: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 9 10:48:24.823: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
May 9 10:48:24.901: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker
May 9 10:48:24.901: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2
May 9 10:48:24.901: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker
May 9 10:48:24.901: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2
May 9 10:48:24.901: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2
May 9 10:48:24.901: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb26697-91e2-11ea-a20c-0242ac110018.160d5675366d4132], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-qdwc5/filler-pod-9bb26697-91e2-11ea-a20c-0242ac110018 to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb26697-91e2-11ea-a20c-0242ac110018.160d567591312769], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb26697-91e2-11ea-a20c-0242ac110018.160d5675e4eafbcc], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb26697-91e2-11ea-a20c-0242ac110018.160d5675fa123294], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb33807-91e2-11ea-a20c-0242ac110018.160d56753723d157], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-qdwc5/filler-pod-9bb33807-91e2-11ea-a20c-0242ac110018 to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb33807-91e2-11ea-a20c-0242ac110018.160d56759843f8cb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb33807-91e2-11ea-a20c-0242ac110018.160d5675f3fade07], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9bb33807-91e2-11ea-a20c-0242ac110018.160d567604cbe55a], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160d56769e244581], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 9 10:48:32.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qdwc5" for this suite.
May 9 10:48:38.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 9 10:48:38.474: INFO: namespace: e2e-tests-sched-pred-qdwc5, resource: bindings, ignored listing per whitelist
May 9 10:48:38.506: INFO: namespace e2e-tests-sched-pred-qdwc5 deletion completed in 6.399650592s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:13.853 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 9 10:48:38.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
May 9 10:48:38.875: INFO: Waiting up to 5m0s for pod "pod-a4058679-91e2-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-qgzbq" to be "success or failure"
May 9 10:48:38.885: INFO: Pod "pod-a4058679-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.396973ms
May 9 10:48:40.888: INFO: Pod "pod-a4058679-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013324071s
May 9 10:48:43.118: INFO: Pod "pod-a4058679-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243181367s
May 9 10:48:45.123: INFO: Pod "pod-a4058679-91e2-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.247513813s
STEP: Saw pod success
May 9 10:48:45.123: INFO: Pod "pod-a4058679-91e2-11ea-a20c-0242ac110018" satisfied condition "success or failure"
May 9 10:48:45.126: INFO: Trying to get logs from node hunter-worker2 pod pod-a4058679-91e2-11ea-a20c-0242ac110018 container test-container:
STEP: delete the pod
May 9 10:48:45.169: INFO: Waiting for pod pod-a4058679-91e2-11ea-a20c-0242ac110018 to disappear
May 9 10:48:45.220: INFO: Pod pod-a4058679-91e2-11ea-a20c-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 9 10:48:45.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qgzbq" for this suite.
May 9 10:48:51.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:48:51.288: INFO: namespace: e2e-tests-emptydir-qgzbq, resource: bindings, ignored listing per whitelist May 9 10:48:51.347: INFO: namespace e2e-tests-emptydir-qgzbq deletion completed in 6.097364623s • [SLOW TEST:12.841 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:48:51.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 10:48:51.485: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-n92fq" to be "success or failure" May 9 10:48:51.495: INFO: Pod "downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.398492ms May 9 10:48:53.531: INFO: Pod "downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045822975s May 9 10:48:55.535: INFO: Pod "downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050196504s STEP: Saw pod success May 9 10:48:55.535: INFO: Pod "downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:48:55.538: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 10:48:55.568: INFO: Waiting for pod downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018 to disappear May 9 10:48:55.626: INFO: Pod downwardapi-volume-ab84e6d4-91e2-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:48:55.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n92fq" for this suite. 
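The projected downward API test sets an explicit mode on a single item file and verifies it inside the container. A sketch of such a volume definition follows; the path, field reference, and 0400 mode are illustrative assumptions rather than the exact values the test uses.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected volume whose downward API item is written with an
        // explicit per-item file mode; the test asserts that the mode it set
        // shows up on the file inside the container.
        mode := int32(0400)
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                Mode:     &mode,
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }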
May 9 10:49:01.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:49:01.681: INFO: namespace: e2e-tests-projected-n92fq, resource: bindings, ignored listing per whitelist May 9 10:49:01.725: INFO: namespace e2e-tests-projected-n92fq deletion completed in 6.095558097s • [SLOW TEST:10.378 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:49:01.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 9 10:49:01.857: INFO: Waiting up to 5m0s for pod "pod-b1b79a01-91e2-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-6grs2" to be "success or failure" May 9 10:49:01.861: INFO: Pod "pod-b1b79a01-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371386ms May 9 10:49:03.865: INFO: Pod "pod-b1b79a01-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007750819s May 9 10:49:05.868: INFO: Pod "pod-b1b79a01-91e2-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01147213s STEP: Saw pod success May 9 10:49:05.868: INFO: Pod "pod-b1b79a01-91e2-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:49:05.871: INFO: Trying to get logs from node hunter-worker2 pod pod-b1b79a01-91e2-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 10:49:05.988: INFO: Waiting for pod pod-b1b79a01-91e2-11ea-a20c-0242ac110018 to disappear May 9 10:49:05.999: INFO: Pod pod-b1b79a01-91e2-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:49:05.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6grs2" for this suite. 
May 9 10:49:12.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:49:12.123: INFO: namespace: e2e-tests-emptydir-6grs2, resource: bindings, ignored listing per whitelist May 9 10:49:12.154: INFO: namespace e2e-tests-emptydir-6grs2 deletion completed in 6.151824219s • [SLOW TEST:10.429 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:49:12.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 9 10:49:12.978: INFO: Waiting up to 5m0s for pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5" in namespace "e2e-tests-svcaccounts-gtchz" to be "success or failure" May 9 10:49:13.002: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.836331ms May 9 10:49:15.006: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027766532s May 9 10:49:17.010: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031498845s May 9 10:49:19.014: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5": Phase="Running", Reason="", readiness=false. Elapsed: 6.035945523s May 9 10:49:21.018: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039367917s STEP: Saw pod success May 9 10:49:21.018: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5" satisfied condition "success or failure" May 9 10:49:21.020: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5 container token-test: STEP: delete the pod May 9 10:49:21.060: INFO: Waiting for pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5 to disappear May 9 10:49:21.065: INFO: Pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-lqxs5 no longer exists STEP: Creating a pod to test consume service account root CA May 9 10:49:21.068: INFO: Waiting up to 5m0s for pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5" in namespace "e2e-tests-svcaccounts-gtchz" to be "success or failure" May 9 10:49:21.082: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.070752ms May 9 10:49:23.087: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01857522s May 9 10:49:25.091: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02256611s May 9 10:49:27.095: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026564433s May 9 10:49:29.099: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.030512899s STEP: Saw pod success May 9 10:49:29.099: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5" satisfied condition "success or failure" May 9 10:49:29.102: INFO: Trying to get logs from node hunter-worker pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5 container root-ca-test: STEP: delete the pod May 9 10:49:29.146: INFO: Waiting for pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5 to disappear May 9 10:49:29.155: INFO: Pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-m9rm5 no longer exists STEP: Creating a pod to test consume service account namespace May 9 10:49:29.211: INFO: Waiting up to 5m0s for pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2" in namespace "e2e-tests-svcaccounts-gtchz" to be "success or failure" May 9 10:49:29.221: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.629981ms May 9 10:49:31.225: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014518077s May 9 10:49:33.229: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018152169s May 9 10:49:35.232: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2": Phase="Running", Reason="", readiness=false. Elapsed: 6.021255054s May 9 10:49:37.236: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02574936s STEP: Saw pod success May 9 10:49:37.237: INFO: Pod "pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2" satisfied condition "success or failure" May 9 10:49:37.240: INFO: Trying to get logs from node hunter-worker pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2 container namespace-test: STEP: delete the pod May 9 10:49:37.288: INFO: Waiting for pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2 to disappear May 9 10:49:37.299: INFO: Pod pod-service-account-b8594f4a-91e2-11ea-a20c-0242ac110018-xqpf2 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:49:37.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-gtchz" for this suite. 
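The three pods above each consume one of the files that the kubelet projects into a pod running under a service account: the token, the cluster CA bundle, and the namespace. A small sketch of reading them from the well-known mount path inside a container; only the standard library is needed.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Inside a pod with automounted service account credentials, the
        // kubelet places token, ca.crt and namespace at this path; the three
        // sub-tests above each read one of these files and echo it back.
        const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
        for _, name := range []string{"token", "ca.crt", "namespace"} {
            data, err := os.ReadFile(filepath.Join(dir, name))
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", name, len(data))
        }
    }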
May 9 10:49:43.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:49:43.412: INFO: namespace: e2e-tests-svcaccounts-gtchz, resource: bindings, ignored listing per whitelist May 9 10:49:43.425: INFO: namespace e2e-tests-svcaccounts-gtchz deletion completed in 6.096606191s • [SLOW TEST:31.271 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:49:43.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0509 10:49:55.822527 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 9 10:49:55.822: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:49:55.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-d5wds" for this suite. 
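The garbage collector test above gives half of the doomed controller's pods a second owner, so deleting simpletest-rc-to-be-deleted (with dependents) leaves those pods in place: the collector only removes an object once all of its owners are gone. A sketch of what such a doubly-owned object's metadata looks like; the UIDs below are placeholders.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func main() {
        controller := true
        block := true
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "simpletest-pod",
                OwnerReferences: []metav1.OwnerReference{
                    {
                        // The owner that the test deletes.
                        APIVersion:         "v1",
                        Kind:               "ReplicationController",
                        Name:               "simpletest-rc-to-be-deleted",
                        UID:                types.UID("00000000-0000-0000-0000-000000000001"),
                        Controller:         &controller,
                        BlockOwnerDeletion: &block,
                    },
                    {
                        // The surviving owner that keeps the pod alive.
                        APIVersion: "v1",
                        Kind:       "ReplicationController",
                        Name:       "simpletest-rc-to-stay",
                        UID:        types.UID("00000000-0000-0000-0000-000000000002"),
                    },
                },
            },
        }
        fmt.Println(len(pod.OwnerReferences), "owners")
    }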
May 9 10:50:03.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:50:03.882: INFO: namespace: e2e-tests-gc-d5wds, resource: bindings, ignored listing per whitelist May 9 10:50:03.916: INFO: namespace e2e-tests-gc-d5wds deletion completed in 8.090084913s • [SLOW TEST:20.491 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:50:03.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 9 10:50:08.519: INFO: Successfully updated pod "pod-update-d6c115f1-91e2-11ea-a20c-0242ac110018" STEP: verifying the updated pod is in kubernetes May 9 10:50:08.566: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:50:08.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4wfdt" for this suite. 
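The pod update above is a read-modify-write of mutable metadata such as labels. A sketch using current client-go signatures (context-taking Get and Update), which differ from the 1.13-era framework helpers in this run; the kubeconfig path is the one printed above, while the namespace, pod name, and label are placeholders.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        pods := client.CoreV1().Pods("default")
        pod, err := pods.Get(context.TODO(), "pod-update-example", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        // Only mutable fields such as labels and annotations can be changed
        // in place on a running pod.
        pod.Labels["time"] = "updated"
        if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("pod updated")
    }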
May 9 10:50:30.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:50:30.639: INFO: namespace: e2e-tests-pods-4wfdt, resource: bindings, ignored listing per whitelist May 9 10:50:30.685: INFO: namespace e2e-tests-pods-4wfdt deletion completed in 22.113210244s • [SLOW TEST:26.769 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:50:30.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-e6be915b-91e2-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 10:50:30.830: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-nvm5j" to be "success or failure" May 9 10:50:30.834: INFO: Pod "pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.880536ms May 9 10:50:32.838: INFO: Pod "pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00840119s May 9 10:50:34.842: INFO: Pod "pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012023608s STEP: Saw pod success May 9 10:50:34.842: INFO: Pod "pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:50:34.844: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 9 10:50:34.865: INFO: Waiting for pod pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018 to disappear May 9 10:50:34.869: INFO: Pod pod-projected-secrets-e6c05637-91e2-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:50:34.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nvm5j" for this suite. 
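The projected secret test combines a volume-level DefaultMode with a pod-level non-root UID and fsGroup. A sketch of that combination; the UID, group, mode, secret name, image, and mount path are illustrative.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1000)
        fsGroup := int64(1001)
        mode := int32(0440)
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
            Spec: corev1.PodSpec{
                // Run as a non-root user and apply an fsGroup to the volume.
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &uid,
                    FSGroup:   &fsGroup,
                },
                Volumes: []corev1.Volume{{
                    Name: "projected-secret",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &mode,
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls -ln /etc/projected"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-secret",
                        MountPath: "/etc/projected",
                    }},
                }},
            },
        }
        fmt.Println(pod.Name)
    }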
May 9 10:50:40.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:50:40.962: INFO: namespace: e2e-tests-projected-nvm5j, resource: bindings, ignored listing per whitelist May 9 10:50:41.013: INFO: namespace e2e-tests-projected-nvm5j deletion completed in 6.140985312s • [SLOW TEST:10.328 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:50:41.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0509 10:50:42.274095 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 9 10:50:42.274: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:50:42.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xfqdd" for this suite. 
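"Not orphaning" in the next garbage collector case means the deployment is deleted with a cascading propagation policy, so the collector also removes the ReplicaSet and pods it owns. A sketch of the delete options involved; the trailing comment shows where they would be passed with current client-go, which is an assumption rather than the framework call used in this run.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Cascading (non-orphaning) deletion; metav1.DeletePropagationOrphan
        // would instead leave the owned ReplicaSet behind.
        policy := metav1.DeletePropagationBackground
        opts := metav1.DeleteOptions{PropagationPolicy: &policy}
        fmt.Println(*opts.PropagationPolicy)
        // e.g. client.AppsV1().Deployments(ns).Delete(ctx, "nginx-deployment", opts)
        // with current client-go signatures (illustrative only).
    }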
May 9 10:50:48.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:50:48.340: INFO: namespace: e2e-tests-gc-xfqdd, resource: bindings, ignored listing per whitelist May 9 10:50:48.385: INFO: namespace e2e-tests-gc-xfqdd deletion completed in 6.105866015s • [SLOW TEST:7.371 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:50:48.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 10:50:48.468: INFO: Creating ReplicaSet my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018 May 9 10:50:48.508: INFO: Pod name my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018: Found 0 pods out of 1 May 9 10:50:53.512: INFO: Pod name my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018: Found 1 pods out of 1 May 9 10:50:53.512: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018" is running May 9 10:50:53.515: INFO: Pod "my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018-npl8s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 10:50:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 10:50:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 10:50:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 10:50:48 +0000 UTC Reason: Message:}]) May 9 10:50:53.515: INFO: Trying to dial the pod May 9 10:50:58.528: INFO: Controller my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018: Got expected result from replica 1 [my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018-npl8s]: "my-hostname-basic-f144ddff-91e2-11ea-a20c-0242ac110018-npl8s", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:50:58.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-w7zgq" for this suite. 
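The ReplicaSet test above creates one replica of a public serve-hostname image and then dials the pod, expecting it to answer with its own pod name. A minimal ReplicaSet of that shape follows; the image tag and names are assumptions, not values read from this log.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        labels := map[string]string{"name": "my-hostname-basic"}
        rs := appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: &replicas,
                // The selector must match the pod template's labels.
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "my-hostname-basic",
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
                        }},
                    },
                },
            },
        }
        fmt.Println(rs.Name, *rs.Spec.Replicas)
    }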
May 9 10:51:04.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:51:04.588: INFO: namespace: e2e-tests-replicaset-w7zgq, resource: bindings, ignored listing per whitelist May 9 10:51:04.614: INFO: namespace e2e-tests-replicaset-w7zgq deletion completed in 6.080249426s • [SLOW TEST:16.228 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:51:04.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 10:51:04.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 9 10:51:04.830: INFO: stderr: "" May 9 10:51:04.830: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:51:04.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jl4m6" for this suite. 
May 9 10:51:10.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:51:10.963: INFO: namespace: e2e-tests-kubectl-jl4m6, resource: bindings, ignored listing per whitelist May 9 10:51:10.993: INFO: namespace e2e-tests-kubectl-jl4m6 deletion completed in 6.160183346s • [SLOW TEST:6.379 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:51:10.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 10:51:11.079: INFO: Creating deployment "nginx-deployment" May 9 10:51:11.126: INFO: Waiting for observed generation 1 May 9 10:51:13.206: INFO: Waiting for all required pods to come up May 9 10:51:13.212: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 9 10:51:25.301: INFO: Waiting for deployment "nginx-deployment" to complete May 9 10:51:25.306: INFO: Updating deployment "nginx-deployment" with a non-existent image May 9 10:51:25.310: INFO: Updating deployment nginx-deployment May 9 10:51:25.310: INFO: Waiting for observed generation 2 May 9 10:51:27.354: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 9 10:51:27.356: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 9 10:51:27.358: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 9 10:51:27.366: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 9 10:51:27.366: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 9 10:51:27.368: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 9 10:51:27.371: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 9 10:51:27.371: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 9 10:51:27.376: INFO: Updating deployment nginx-deployment May 9 10:51:27.376: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 9 10:51:27.944: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 9 10:51:28.138: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 9 10:51:29.207: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d4hzl/deployments/nginx-deployment,UID:febf36f2-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573261,Generation:3,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-09 10:51:26 +0000 UTC 2020-05-09 10:51:11 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-09 10:51:27 +0000 UTC 2020-05-09 10:51:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 9 10:51:29.244: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d4hzl/replicasets/nginx-deployment-5c98f8fb5,UID:073ad951-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573248,Generation:3,CreationTimestamp:2020-05-09 10:51:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment febf36f2-91e2-11ea-99e8-0242ac110002 0xc0017d3fe7 0xc0017d3fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 9 10:51:29.244: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 9 10:51:29.244: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d4hzl/replicasets/nginx-deployment-85ddf47c5d,UID:fec71783-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573289,Generation:3,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment febf36f2-91e2-11ea-99e8-0242ac110002 0xc0017d40a7 0xc0017d40a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 9 10:51:29.522: INFO: Pod "nginx-deployment-5c98f8fb5-4j2mh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4j2mh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-4j2mh,UID:0935625c-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573284,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d4f77 0xc0017d4f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d4ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.522: INFO: Pod "nginx-deployment-5c98f8fb5-5b49m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5b49m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-5b49m,UID:08ea7f66-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573268,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5087 0xc0017d5088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5100} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.523: INFO: Pod "nginx-deployment-5c98f8fb5-5ck8p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5ck8p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-5ck8p,UID:07401992-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573228,Generation:0,CreationTimestamp:2020-05-09 10:51:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5197 0xc0017d5198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5210} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-09 10:51:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.523: INFO: Pod "nginx-deployment-5c98f8fb5-9vmbs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9vmbs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-9vmbs,UID:09356aec-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573285,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d52f0 0xc0017d52f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5370} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.523: INFO: Pod "nginx-deployment-5c98f8fb5-bpxwj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bpxwj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-bpxwj,UID:09608fbb-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573302,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5407 0xc0017d5408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5480} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d54a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.523: INFO: Pod "nginx-deployment-5c98f8fb5-dqh24" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dqh24,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-dqh24,UID:09353ffd-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573287,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5517 0xc0017d5518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5590} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc0017d55b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.523: INFO: Pod "nginx-deployment-5c98f8fb5-dw8x5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dw8x5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-dw8x5,UID:0771e2a0-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573232,Generation:0,CreationTimestamp:2020-05-09 10:51:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5627 0xc0017d5628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d56a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d56c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-09 10:51:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.523: INFO: Pod "nginx-deployment-5c98f8fb5-h6z45" is not 
available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h6z45,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-h6z45,UID:08eaa449-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573271,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5780 0xc0017d5781}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5800} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-5c98f8fb5-mvlxj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mvlxj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-mvlxj,UID:073d5200-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573208,Generation:0,CreationTimestamp:2020-05-09 10:51:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5897 0xc0017d5898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] 
[] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5910} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-09 10:51:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-5c98f8fb5-mxw7n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mxw7n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-mxw7n,UID:07401149-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573224,Generation:0,CreationTimestamp:2020-05-09 10:51:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d59f0 0xc0017d59f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-09 10:51:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-5c98f8fb5-ptx8r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ptx8r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-ptx8r,UID:08ccca84-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573300,Generation:0,CreationTimestamp:2020-05-09 10:51:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5b50 0xc0017d5b51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5bd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-09 10:51:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-5c98f8fb5-tw46r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tw46r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-tw46r,UID:07773dee-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573237,Generation:0,CreationTimestamp:2020-05-09 10:51:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5cb0 0xc0017d5cb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-09 10:51:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-5c98f8fb5-zr9vr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zr9vr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-5c98f8fb5-zr9vr,UID:0935695e-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573294,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 073ad951-91e3-11ea-99e8-0242ac110002 0xc0017d5e50 0xc0017d5e51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017d5ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-85ddf47c5d-5wh2k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5wh2k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-5wh2k,UID:08eb251f-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573278,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc0017d5f67 0xc0017d5f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017d5fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-85ddf47c5d-6hznj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6hznj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-6hznj,UID:0935a75f-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573291,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324077 0xc001324078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324170} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.524: INFO: Pod "nginx-deployment-85ddf47c5d-6m62r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6m62r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-6m62r,UID:09359f3b-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573292,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324207 
0xc001324208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0013242a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.525: INFO: Pod "nginx-deployment-85ddf47c5d-7rcgl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7rcgl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-7rcgl,UID:fed48ee7-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573141,Generation:0,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc0013243a7 0xc0013243a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324420} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.146,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d23b7c457ade7b13ea0306a234c116739c0e2c17ebb56ab954d729ba215800dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.525: INFO: Pod "nginx-deployment-85ddf47c5d-8282h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8282h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-8282h,UID:08eb09ef-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573274,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324577 0xc001324578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0013245f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.525: INFO: Pod "nginx-deployment-85ddf47c5d-8kr9d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8kr9d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-8kr9d,UID:0935b74f-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573290,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324687 0xc001324688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324700} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.525: INFO: Pod "nginx-deployment-85ddf47c5d-cfgjj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cfgjj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-cfgjj,UID:0935c378-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573295,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc0013247a7 0xc0013247a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324820} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.525: INFO: Pod "nginx-deployment-85ddf47c5d-d8b5v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d8b5v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-d8b5v,UID:fedf8db6-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573164,Generation:0,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc0013248b7 
0xc0013248b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324930} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.20,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://665b520f939f452ace7992ab52c981800470cdb066b704a623876cbb12428582}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.525: INFO: Pod "nginx-deployment-85ddf47c5d-fg92k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fg92k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-fg92k,UID:0935e6fe-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573296,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324a17 0xc001324a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-g4qtn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g4qtn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-g4qtn,UID:08ccb6c6-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573257,Generation:0,CreationTimestamp:2020-05-09 10:51:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324b27 0xc001324b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324ba0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001324bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-gpkkh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gpkkh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-gpkkh,UID:08eb0aa4-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573279,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324c67 0xc001324c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-gtzzc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gtzzc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-gtzzc,UID:fed52e49-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573130,Generation:0,CreationTimestamp:2020-05-09 10:51:11 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324d77 0xc001324d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.18,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a3ec6bf833d8ad50dcead8a79ea68349d18a4fc531044c82d00819222d29627b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-l9mt9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l9mt9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-l9mt9,UID:fedd9154-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573166,Generation:0,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001324ed7 0xc001324ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil 
nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001324f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001324f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.149,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2232877bc523bff209e3c972dd5bb69ee8eb0914fbbcf36fc43fd706b89da200}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-ld6fd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ld6fd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-ld6fd,UID:08ccd2d8-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573260,Generation:0,CreationTimestamp:2020-05-09 10:51:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001325037 0xc001325038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0013250c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0013250e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-lx6mn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lx6mn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-lx6mn,UID:feddac66-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573138,Generation:0,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001325157 0xc001325158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0013251d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001325260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.19,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7f10cbff5633a2c18a05a52a526a78917d66a1ed3058d3788715dda967513958}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-p927s" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p927s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-p927s,UID:fedd9748-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573157,Generation:0,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001325327 0xc001325328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0013253a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0013253c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:21 +0000 UTC } {ContainersReady True 
0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.148,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a397ef79b447e2b2cff2170a0e987c9f686715c25d1844a2c962c1730cc312cf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.526: INFO: Pod "nginx-deployment-85ddf47c5d-srvw4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-srvw4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-srvw4,UID:08eafa6c-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573272,Generation:0,CreationTimestamp:2020-05-09 10:51:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc0013254f7 0xc0013254f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001325570} {node.kubernetes.io/unreachable Exists NoExecute 0xc001325590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.527: INFO: Pod "nginx-deployment-85ddf47c5d-t87j7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t87j7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-t87j7,UID:fed53598-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573147,Generation:0,CreationTimestamp:2020-05-09 10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001325687 0xc001325688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001325700} {node.kubernetes.io/unreachable Exists NoExecute 0xc001325720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.147,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ca558f4a7d80a5f926398c66ab6904df439aac54907679c4d84d44dcb925b573}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.527: INFO: Pod "nginx-deployment-85ddf47c5d-v7cz4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v7cz4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-v7cz4,UID:fedda4a9-91e2-11ea-99e8-0242ac110002,ResourceVersion:9573169,Generation:0,CreationTimestamp:2020-05-09 
10:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001325897 0xc001325898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001325910} {node.kubernetes.io/unreachable Exists NoExecute 0xc001325930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.21,StartTime:2020-05-09 10:51:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-09 10:51:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3f5a25ba7e7c086c2cb97a0fcf75d69058a50ffc391350261bafc5e5d2a0a213}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 9 10:51:29.527: INFO: Pod "nginx-deployment-85ddf47c5d-xjmrk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xjmrk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d4hzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d4hzl/pods/nginx-deployment-85ddf47c5d-xjmrk,UID:08c75845-91e3-11ea-99e8-0242ac110002,ResourceVersion:9573297,Generation:0,CreationTimestamp:2020-05-09 10:51:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fec71783-91e2-11ea-99e8-0242ac110002 0xc001325b37 
0xc001325b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lrrqw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrrqw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lrrqw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001325bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001325bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 10:51:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-09 10:51:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:51:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-d4hzl" for this suite. 
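For reference, the proportional-scaling behaviour verified above can be reproduced by hand roughly as follows. This is only a sketch with illustrative names (it is not the suite's own fixture) and assumes a working kubeconfig against a similar cluster:

  # Start a rollout, then scale while it is still in progress; with the default
  # RollingUpdate strategy (maxSurge/maxUnavailable 25%) the Deployment controller
  # resizes both the old and the new ReplicaSet proportionally.
  kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine
  kubectl scale deployment nginx-deployment --replicas=10
  kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:1.15-alpine
  kubectl scale deployment nginx-deployment --replicas=20
  kubectl get replicasets -l app=nginx-deployment -w   # watch old/new RS replica counts change together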
May 9 10:51:49.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:51:49.864: INFO: namespace: e2e-tests-deployment-d4hzl, resource: bindings, ignored listing per whitelist May 9 10:51:49.891: INFO: namespace e2e-tests-deployment-d4hzl deletion completed in 20.206784369s • [SLOW TEST:38.897 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:51:49.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 9 10:51:50.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-d59km' May 9 10:51:50.307: INFO: stderr: "" May 9 10:51:50.307: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 9 10:52:00.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-d59km -o json' May 9 10:52:00.471: INFO: stderr: "" May 9 10:52:00.471: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-09T10:51:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-d59km\",\n \"resourceVersion\": \"9573611\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-d59km/pods/e2e-test-nginx-pod\",\n \"uid\": \"161de675-91e3-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-g5btc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n 
\"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-g5btc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-g5btc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T10:51:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T10:51:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T10:51:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-09T10:51:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://adb0bf36ab7c3ab0da3d3e4a9d4841f6688e070590a6c591b7577439915abd02\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-09T10:51:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.164\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-09T10:51:50Z\"\n }\n}\n" STEP: replace the image in the pod May 9 10:52:00.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-d59km' May 9 10:52:01.637: INFO: stderr: "" May 9 10:52:01.637: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 9 10:52:01.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-d59km' May 9 10:52:11.741: INFO: stderr: "" May 9 10:52:11.741: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:52:11.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d59km" for this suite. 
May 9 10:52:19.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:52:19.863: INFO: namespace: e2e-tests-kubectl-d59km, resource: bindings, ignored listing per whitelist May 9 10:52:19.899: INFO: namespace e2e-tests-kubectl-d59km deletion completed in 8.138256548s • [SLOW TEST:30.008 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:52:19.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:52:59.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-dks6p" for this suite. 
May 9 10:53:07.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:53:07.849: INFO: namespace: e2e-tests-container-runtime-dks6p, resource: bindings, ignored listing per whitelist May 9 10:53:07.854: INFO: namespace e2e-tests-container-runtime-dks6p deletion completed in 8.14144364s • [SLOW TEST:47.954 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:53:07.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 10:53:08.427: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-hvbjx" to be "success or failure" May 9 10:53:08.449: INFO: Pod "downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.083873ms May 9 10:53:10.454: INFO: Pod "downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0264699s May 9 10:53:12.463: INFO: Pod "downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036051867s May 9 10:53:14.474: INFO: Pod "downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047050516s STEP: Saw pod success May 9 10:53:14.474: INFO: Pod "downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:53:14.477: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 10:53:14.770: INFO: Waiting for pod downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018 to disappear May 9 10:53:15.019: INFO: Pod downwardapi-volume-44a7e123-91e3-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:53:15.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hvbjx" for this suite. 
May 9 10:53:23.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:53:23.403: INFO: namespace: e2e-tests-projected-hvbjx, resource: bindings, ignored listing per whitelist May 9 10:53:23.422: INFO: namespace e2e-tests-projected-hvbjx deletion completed in 8.332870958s • [SLOW TEST:15.568 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:53:23.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 9 10:53:35.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 9 10:53:35.960: INFO: Pod pod-with-prestop-http-hook still exists May 9 10:53:37.960: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 9 10:53:37.964: INFO: Pod pod-with-prestop-http-hook still exists May 9 10:53:39.960: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 9 10:53:39.963: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:53:39.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mfw2w" for this suite. 
May 9 10:54:03.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:54:04.015: INFO: namespace: e2e-tests-container-lifecycle-hook-mfw2w, resource: bindings, ignored listing per whitelist May 9 10:54:04.079: INFO: namespace e2e-tests-container-lifecycle-hook-mfw2w deletion completed in 24.108534694s • [SLOW TEST:40.656 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:54:04.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 9 10:54:04.304: INFO: Waiting up to 5m0s for pod "var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018" in namespace "e2e-tests-var-expansion-95l78" to be "success or failure" May 9 10:54:04.318: INFO: Pod "var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.934912ms May 9 10:54:06.360: INFO: Pod "var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055222645s May 9 10:54:08.364: INFO: Pod "var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059232533s May 9 10:54:10.368: INFO: Pod "var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063270261s STEP: Saw pod success May 9 10:54:10.368: INFO: Pod "var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:54:10.371: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 10:54:10.418: INFO: Waiting for pod var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018 to disappear May 9 10:54:10.431: INFO: Pod var-expansion-65f78bfa-91e3-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:54:10.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-95l78" for this suite. 
May 9 10:54:16.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:54:16.490: INFO: namespace: e2e-tests-var-expansion-95l78, resource: bindings, ignored listing per whitelist May 9 10:54:16.530: INFO: namespace e2e-tests-var-expansion-95l78 deletion completed in 6.095617834s • [SLOW TEST:12.451 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:54:16.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 9 10:54:16.693: INFO: Waiting up to 5m0s for pod "pod-6d5ac7be-91e3-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-lb9vc" to be "success or failure" May 9 10:54:16.758: INFO: Pod "pod-6d5ac7be-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 64.834182ms May 9 10:54:18.762: INFO: Pod "pod-6d5ac7be-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068520035s May 9 10:54:20.766: INFO: Pod "pod-6d5ac7be-91e3-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073165885s STEP: Saw pod success May 9 10:54:20.766: INFO: Pod "pod-6d5ac7be-91e3-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:54:20.769: INFO: Trying to get logs from node hunter-worker pod pod-6d5ac7be-91e3-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 10:54:20.812: INFO: Waiting for pod pod-6d5ac7be-91e3-11ea-a20c-0242ac110018 to disappear May 9 10:54:20.841: INFO: Pod pod-6d5ac7be-91e3-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:54:20.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lb9vc" for this suite. 
May 9 10:54:26.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:54:26.902: INFO: namespace: e2e-tests-emptydir-lb9vc, resource: bindings, ignored listing per whitelist May 9 10:54:26.936: INFO: namespace e2e-tests-emptydir-lb9vc deletion completed in 6.091626104s • [SLOW TEST:10.405 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:54:26.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 10:54:31.171: INFO: Waiting up to 5m0s for pod "client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018" in namespace "e2e-tests-pods-jrcqn" to be "success or failure" May 9 10:54:31.232: INFO: Pod "client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 61.438251ms May 9 10:54:33.236: INFO: Pod "client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065309705s May 9 10:54:35.240: INFO: Pod "client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.069121883s May 9 10:54:37.244: INFO: Pod "client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073310203s STEP: Saw pod success May 9 10:54:37.244: INFO: Pod "client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:54:37.248: INFO: Trying to get logs from node hunter-worker pod client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018 container env3cont: STEP: delete the pod May 9 10:54:37.267: INFO: Waiting for pod client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018 to disappear May 9 10:54:37.332: INFO: Pod client-envvars-76007c0e-91e3-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:54:37.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jrcqn" for this suite. 
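A sketch of the service environment variables being checked above: a Service that exists before a pod starts shows up in that pod's environment as *_SERVICE_HOST/*_SERVICE_PORT variables. All names below are illustrative, not the suite's objects:

  kubectl run server --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
  kubectl expose pod server --name=fooservice --port=8765 --target-port=80
  # Wait until the Service exists and the server pod is Running before starting
  # the client; only Services created earlier are reflected in its environment.
  kubectl run env-client --generator=run-pod/v1 --restart=Never \
      --image=docker.io/library/busybox:1.29 --command -- sh -c 'env | grep FOOSERVICE'
  kubectl logs env-client   # FOOSERVICE_SERVICE_HOST=..., FOOSERVICE_SERVICE_PORT=8765, ...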
May 9 10:55:23.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:55:23.451: INFO: namespace: e2e-tests-pods-jrcqn, resource: bindings, ignored listing per whitelist May 9 10:55:23.455: INFO: namespace e2e-tests-pods-jrcqn deletion completed in 46.088099012s • [SLOW TEST:56.519 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:55:23.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 10:55:23.632: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 9 10:55:28.636: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 9 10:55:28.637: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 9 10:55:28.656: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-kvw4s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kvw4s/deployments/test-cleanup-deployment,UID:98444905-91e3-11ea-99e8-0242ac110002,ResourceVersion:9574292,Generation:1,CreationTimestamp:2020-05-09 10:55:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 9 10:55:28.658: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:55:28.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-kvw4s" for this suite. May 9 10:55:34.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:55:34.828: INFO: namespace: e2e-tests-deployment-kvw4s, resource: bindings, ignored listing per whitelist May 9 10:55:34.835: INFO: namespace e2e-tests-deployment-kvw4s deletion completed in 6.130369875s • [SLOW TEST:11.379 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:55:34.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 9 10:55:35.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dr5bs' May 9 10:55:35.281: INFO: stderr: "" May 9 10:55:35.281: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 9 10:55:36.286: INFO: Selector matched 1 pods for map[app:redis] May 9 10:55:36.286: INFO: Found 0 / 1 May 9 10:55:37.286: INFO: Selector matched 1 pods for map[app:redis] May 9 10:55:37.286: INFO: Found 0 / 1 May 9 10:55:38.286: INFO: Selector matched 1 pods for map[app:redis] May 9 10:55:38.286: INFO: Found 0 / 1 May 9 10:55:39.286: INFO: Selector matched 1 pods for map[app:redis] May 9 10:55:39.286: INFO: Found 1 / 1 May 9 10:55:39.286: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 9 10:55:39.288: INFO: Selector matched 1 pods for map[app:redis] May 9 10:55:39.288: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 9 10:55:39.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vnnq9 --namespace=e2e-tests-kubectl-dr5bs -p {"metadata":{"annotations":{"x":"y"}}}' May 9 10:55:39.392: INFO: stderr: "" May 9 10:55:39.392: INFO: stdout: "pod/redis-master-vnnq9 patched\n" STEP: checking annotations May 9 10:55:39.395: INFO: Selector matched 1 pods for map[app:redis] May 9 10:55:39.395: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:55:39.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dr5bs" for this suite. May 9 10:56:01.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:56:01.451: INFO: namespace: e2e-tests-kubectl-dr5bs, resource: bindings, ignored listing per whitelist May 9 10:56:01.494: INFO: namespace e2e-tests-kubectl-dr5bs deletion completed in 22.092045191s • [SLOW TEST:26.659 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:56:01.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 9 10:56:01.573: INFO: Waiting up to 5m0s for pod "pod-abe40bd8-91e3-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-n7f9q" to be "success or failure" May 9 10:56:01.595: INFO: Pod "pod-abe40bd8-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.570071ms May 9 10:56:03.597: INFO: Pod "pod-abe40bd8-91e3-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024307852s May 9 10:56:05.602: INFO: Pod "pod-abe40bd8-91e3-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.028830471s May 9 10:56:07.607: INFO: Pod "pod-abe40bd8-91e3-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033480371s STEP: Saw pod success May 9 10:56:07.607: INFO: Pod "pod-abe40bd8-91e3-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:56:07.610: INFO: Trying to get logs from node hunter-worker pod pod-abe40bd8-91e3-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 10:56:07.642: INFO: Waiting for pod pod-abe40bd8-91e3-11ea-a20c-0242ac110018 to disappear May 9 10:56:07.655: INFO: Pod pod-abe40bd8-91e3-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:56:07.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n7f9q" for this suite. May 9 10:56:13.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:56:13.735: INFO: namespace: e2e-tests-emptydir-n7f9q, resource: bindings, ignored listing per whitelist May 9 10:56:13.768: INFO: namespace e2e-tests-emptydir-n7f9q deletion completed in 6.109050789s • [SLOW TEST:12.274 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:56:13.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
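A minimal DaemonSet along the lines of the "daemon-set" created above (illustrative spec, not the suite's fixture); one pod is scheduled per schedulable node, and tainted control-plane nodes are skipped unless a matching toleration is added:

  cat <<'EOF' | kubectl create -f -
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine
  EOF
  kubectl get pods -l app=daemon-set -o wide   # one pod per worker node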
May 9 10:56:13.892: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:13.938: INFO: Number of nodes with available pods: 0 May 9 10:56:13.938: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:14.942: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:14.944: INFO: Number of nodes with available pods: 0 May 9 10:56:14.944: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:16.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:16.069: INFO: Number of nodes with available pods: 0 May 9 10:56:16.069: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:16.943: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:16.946: INFO: Number of nodes with available pods: 0 May 9 10:56:16.946: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:17.943: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:17.946: INFO: Number of nodes with available pods: 1 May 9 10:56:17.946: INFO: Node hunter-worker2 is running more than one daemon pod May 9 10:56:18.941: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:18.944: INFO: Number of nodes with available pods: 2 May 9 10:56:18.944: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
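The steps above amount to creating a minimal DaemonSet, deleting one of its pods, and waiting for the controller to replace it on the same node. A rough kubectl sketch of that flow (manifest, labels, and image are illustrative, not the test's actual spec):

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # placeholder image
EOF
# pick one daemon pod and delete it; the controller should schedule a replacement on that node
kubectl get pods -l app=daemon-set -o wide
kubectl delete pod <one-of-the-daemon-pods>
kubectl get pods -l app=daemon-set -o wide --watch

Note that the control-plane node carries a node-role.kubernetes.io/master:NoSchedule taint, which is why the log skips it on every check; a DaemonSet that should also cover that node would need a matching toleration under spec.template.spec.tolerations.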
May 9 10:56:18.973: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:18.975: INFO: Number of nodes with available pods: 1 May 9 10:56:18.975: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:20.004: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:20.007: INFO: Number of nodes with available pods: 1 May 9 10:56:20.007: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:20.980: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:20.984: INFO: Number of nodes with available pods: 1 May 9 10:56:20.984: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:22.000: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:22.049: INFO: Number of nodes with available pods: 1 May 9 10:56:22.049: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:22.980: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:22.984: INFO: Number of nodes with available pods: 1 May 9 10:56:22.984: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:23.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:23.983: INFO: Number of nodes with available pods: 1 May 9 10:56:23.983: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:24.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:24.982: INFO: Number of nodes with available pods: 1 May 9 10:56:24.982: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:25.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:25.983: INFO: Number of nodes with available pods: 1 May 9 10:56:25.983: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:26.980: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:26.982: INFO: Number of nodes with available pods: 1 May 9 10:56:26.982: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:27.980: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:27.983: INFO: Number of nodes with available pods: 1 May 9 10:56:27.983: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:28.980: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:28.984: INFO: Number of nodes with available pods: 1 May 9 10:56:28.984: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:29.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:29.982: INFO: Number of nodes with available pods: 1 May 9 10:56:29.982: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:30.980: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:30.983: INFO: Number of nodes with available pods: 1 May 9 10:56:30.983: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:32.006: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:32.220: INFO: Number of nodes with available pods: 1 May 9 10:56:32.220: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:33.023: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:33.026: INFO: Number of nodes with available pods: 1 May 9 10:56:33.026: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:34.207: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:34.213: INFO: Number of nodes with available pods: 1 May 9 10:56:34.213: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:34.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:34.981: INFO: Number of nodes with available pods: 1 May 9 10:56:34.981: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:35.980: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:35.984: INFO: Number of nodes with available pods: 1 May 9 10:56:35.984: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:37.269: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:37.579: INFO: Number of nodes with available pods: 1 May 9 10:56:37.579: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:37.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:37.982: INFO: Number of nodes with available pods: 1 May 9 10:56:37.982: INFO: Node hunter-worker is running more than one daemon pod May 9 10:56:38.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 10:56:38.983: INFO: Number of nodes with available pods: 2 May 9 10:56:38.983: INFO: Number 
of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-7lh4l, will wait for the garbage collector to delete the pods May 9 10:56:39.045: INFO: Deleting DaemonSet.extensions daemon-set took: 6.46943ms May 9 10:56:39.245: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.246409ms May 9 10:56:51.349: INFO: Number of nodes with available pods: 0 May 9 10:56:51.349: INFO: Number of running nodes: 0, number of available pods: 0 May 9 10:56:51.356: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7lh4l/daemonsets","resourceVersion":"9574602"},"items":null} May 9 10:56:51.359: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7lh4l/pods","resourceVersion":"9574602"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:56:51.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-7lh4l" for this suite. May 9 10:56:57.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:56:57.448: INFO: namespace: e2e-tests-daemonsets-7lh4l, resource: bindings, ignored listing per whitelist May 9 10:56:57.457: INFO: namespace e2e-tests-daemonsets-7lh4l deletion completed in 6.084921459s • [SLOW TEST:43.689 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:56:57.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-4xjb STEP: Creating a pod to test atomic-volume-subpath May 9 10:56:57.721: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4xjb" in namespace "e2e-tests-subpath-xmq49" to be "success or failure" May 9 10:56:57.725: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949332ms May 9 10:56:59.729: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00770387s May 9 10:57:01.733: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012077074s May 9 10:57:03.738: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016346606s May 9 10:57:05.742: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 8.020963228s May 9 10:57:07.746: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 10.024477836s May 9 10:57:09.750: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 12.02876778s May 9 10:57:11.754: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 14.032755922s May 9 10:57:13.758: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 16.036530751s May 9 10:57:15.761: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 18.040080694s May 9 10:57:17.765: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 20.043294s May 9 10:57:19.768: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 22.046974024s May 9 10:57:21.772: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 24.050321181s May 9 10:57:23.868: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Running", Reason="", readiness=false. Elapsed: 26.146595404s May 9 10:57:25.872: INFO: Pod "pod-subpath-test-downwardapi-4xjb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.15049165s STEP: Saw pod success May 9 10:57:25.872: INFO: Pod "pod-subpath-test-downwardapi-4xjb" satisfied condition "success or failure" May 9 10:57:25.875: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-4xjb container test-container-subpath-downwardapi-4xjb: STEP: delete the pod May 9 10:57:25.911: INFO: Waiting for pod pod-subpath-test-downwardapi-4xjb to disappear May 9 10:57:25.923: INFO: Pod pod-subpath-test-downwardapi-4xjb no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-4xjb May 9 10:57:25.924: INFO: Deleting pod "pod-subpath-test-downwardapi-4xjb" in namespace "e2e-tests-subpath-xmq49" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:57:25.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-xmq49" for this suite. 
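The atomic-writer subpath case above mounts a single file from a downward API volume into the container via a volumeMount subPath and keeps reading it while the test runs. A minimal sketch of such a pod (names, image, and paths are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /test/podname
      subPath: podname            # mount just this item out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs pod-subpath-downwardapi-demo   # prints the pod's own name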
May 9 10:57:33.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:57:33.951: INFO: namespace: e2e-tests-subpath-xmq49, resource: bindings, ignored listing per whitelist May 9 10:57:34.088: INFO: namespace e2e-tests-subpath-xmq49 deletion completed in 8.158508628s • [SLOW TEST:36.631 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:57:34.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 10:57:34.230: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 6.404654ms) May 9 10:57:34.234: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.892186ms) May 9 10:57:34.237: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.192356ms) May 9 10:57:34.241: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.293667ms) May 9 10:57:34.244: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.579794ms) May 9 10:57:34.247: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.202584ms) May 9 10:57:34.250: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.042709ms) May 9 10:57:34.253: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.135584ms) May 9 10:57:34.255: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.916219ms) May 9 10:57:34.258: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.652914ms) May 9 10:57:34.261: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.453245ms) May 9 10:57:34.263: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.5252ms) May 9 10:57:34.266: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.803292ms) May 9 10:57:34.269: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.366826ms) May 9 10:57:34.272: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.294234ms) May 9 10:57:34.293: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 21.182183ms) May 9 10:57:34.297: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.70399ms) May 9 10:57:34.301: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.162325ms) May 9 10:57:34.304: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.262461ms) May 9 10:57:34.308: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.393846ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:57:34.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-k5vx6" for this suite. May 9 10:57:40.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:57:40.392: INFO: namespace: e2e-tests-proxy-k5vx6, resource: bindings, ignored listing per whitelist May 9 10:57:40.416: INFO: namespace e2e-tests-proxy-k5vx6 deletion completed in 6.105315959s • [SLOW TEST:6.328 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:57:40.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-e6ef94fc-91e3-11ea-a20c-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-e6ef956a-91e3-11ea-a20c-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e6ef94fc-91e3-11ea-a20c-0242ac110018 STEP: Updating configmap cm-test-opt-upd-e6ef956a-91e3-11ea-a20c-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-e6ef9597-91e3-11ea-a20c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:57:51.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xvf5z" for this suite. 
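The case above mounts ConfigMaps marked optional, then deletes one, updates another, and creates a missing third, waiting for the kubelet to reflect each change in the mounted files. A rough kubectl equivalent (all names are placeholders):

kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  containers:
  - name: cm-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-upd
      mountPath: /etc/cm-volume-upd
    - name: cm-create
      mountPath: /etc/cm-volume-create
  volumes:
  - name: cm-upd
    configMap:
      name: cm-test-opt-upd
      optional: true
  - name: cm-create
    configMap:
      name: cm-test-opt-create    # does not exist yet; optional lets the pod start anyway
      optional: true
EOF
# update one ConfigMap and create the missing one; the kubelet syncs the volume contents on its next update pass
kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1
kubectl exec pod-configmaps-demo -- sh -c 'cat /etc/cm-volume-upd/data-1; ls /etc/cm-volume-create'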
May 9 10:58:15.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:58:15.094: INFO: namespace: e2e-tests-configmap-xvf5z, resource: bindings, ignored listing per whitelist May 9 10:58:15.126: INFO: namespace e2e-tests-configmap-xvf5z deletion completed in 24.102061571s • [SLOW TEST:34.710 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:58:15.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xwbz7 May 9 10:58:19.251: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xwbz7 STEP: checking the pod's current state and verifying that restartCount is present May 9 10:58:19.254: INFO: Initial restart count of pod liveness-http is 0 May 9 10:58:39.498: INFO: Restart count of pod e2e-tests-container-probe-xwbz7/liveness-http is now 1 (20.243374101s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:58:39.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xwbz7" for this suite. 
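The probe case above runs a pod whose /healthz handler starts failing shortly after startup, then checks that restartCount goes from 0 to 1 once the kubelet restarts the container. A sketch of such a pod, assuming the upstream liveness example image (k8s.gcr.io/liveness), whose handler returns 500 after roughly ten seconds:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF
# after the first failed probe the kubelet restarts the container
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'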
May 9 10:58:46.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:58:46.081: INFO: namespace: e2e-tests-container-probe-xwbz7, resource: bindings, ignored listing per whitelist May 9 10:58:46.117: INFO: namespace e2e-tests-container-probe-xwbz7 deletion completed in 6.139982025s • [SLOW TEST:30.991 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:58:46.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 9 10:58:46.309: INFO: namespace e2e-tests-kubectl-ttghg May 9 10:58:46.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttghg' May 9 10:58:50.667: INFO: stderr: "" May 9 10:58:50.667: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 9 10:58:51.672: INFO: Selector matched 1 pods for map[app:redis] May 9 10:58:51.672: INFO: Found 0 / 1 May 9 10:58:52.719: INFO: Selector matched 1 pods for map[app:redis] May 9 10:58:52.719: INFO: Found 0 / 1 May 9 10:58:53.743: INFO: Selector matched 1 pods for map[app:redis] May 9 10:58:53.743: INFO: Found 0 / 1 May 9 10:58:54.672: INFO: Selector matched 1 pods for map[app:redis] May 9 10:58:54.672: INFO: Found 1 / 1 May 9 10:58:54.672: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 9 10:58:54.675: INFO: Selector matched 1 pods for map[app:redis] May 9 10:58:54.675: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 9 10:58:54.675: INFO: wait on redis-master startup in e2e-tests-kubectl-ttghg May 9 10:58:54.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v6bqb redis-master --namespace=e2e-tests-kubectl-ttghg' May 9 10:58:54.776: INFO: stderr: "" May 9 10:58:54.776: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 May 10:58:53.981 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 May 10:58:53.981 # Server started, Redis version 3.2.12\n1:M 09 May 10:58:53.981 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 May 10:58:53.981 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 9 10:58:54.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-ttghg' May 9 10:58:54.955: INFO: stderr: "" May 9 10:58:54.955: INFO: stdout: "service/rm2 exposed\n" May 9 10:58:54.994: INFO: Service rm2 in namespace e2e-tests-kubectl-ttghg found. STEP: exposing service May 9 10:58:57.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-ttghg' May 9 10:58:57.297: INFO: stderr: "" May 9 10:58:57.297: INFO: stdout: "service/rm3 exposed\n" May 9 10:58:57.380: INFO: Service rm3 in namespace e2e-tests-kubectl-ttghg found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:58:59.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ttghg" for this suite. 
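The expose flow above is reproducible verbatim: exposing the replication controller creates one service, and exposing that service under a new name and port creates a second service selecting the same pods.

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3   # both should list the redis-master pod IP on port 6379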
May 9 10:59:21.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:59:21.510: INFO: namespace: e2e-tests-kubectl-ttghg, resource: bindings, ignored listing per whitelist May 9 10:59:21.512: INFO: namespace e2e-tests-kubectl-ttghg deletion completed in 22.12001083s • [SLOW TEST:35.394 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:59:21.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 9 10:59:21.614: INFO: Waiting up to 5m0s for pod "pod-231fd071-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-sjr64" to be "success or failure" May 9 10:59:21.683: INFO: Pod "pod-231fd071-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 69.216047ms May 9 10:59:23.731: INFO: Pod "pod-231fd071-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116914184s May 9 10:59:25.749: INFO: Pod "pod-231fd071-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135332282s STEP: Saw pod success May 9 10:59:25.749: INFO: Pod "pod-231fd071-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:59:25.751: INFO: Trying to get logs from node hunter-worker2 pod pod-231fd071-91e4-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 10:59:26.052: INFO: Waiting for pod pod-231fd071-91e4-11ea-a20c-0242ac110018 to disappear May 9 10:59:26.180: INFO: Pod pod-231fd071-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:59:26.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sjr64" for this suite. 
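The (root,0777,tmpfs) case above boils down to a pod with a Memory-medium emptyDir: the test container mounts it, verifies the mount is tmpfs, and checks the directory mode and file contents. An illustrative manifest (image, paths, and the check command stand in for what the test image does internally):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/volume && ls -ld /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir
EOF
kubectl logs pod-emptydir-tmpfs-demo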
May 9 10:59:32.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:59:32.250: INFO: namespace: e2e-tests-emptydir-sjr64, resource: bindings, ignored listing per whitelist May 9 10:59:32.281: INFO: namespace e2e-tests-emptydir-sjr64 deletion completed in 6.097289873s • [SLOW TEST:10.769 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:59:32.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-29b686fd-91e4-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 10:59:32.727: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-brhgw" to be "success or failure" May 9 10:59:32.748: INFO: Pod "pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.914278ms May 9 10:59:35.043: INFO: Pod "pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316129407s May 9 10:59:37.048: INFO: Pod "pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320902197s May 9 10:59:39.053: INFO: Pod "pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.3256595s May 9 10:59:41.058: INFO: Pod "pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.330660632s STEP: Saw pod success May 9 10:59:41.058: INFO: Pod "pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:59:41.061: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 9 10:59:41.145: INFO: Waiting for pod pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018 to disappear May 9 10:59:41.343: INFO: Pod pod-projected-secrets-29be4b5f-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:59:41.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-brhgw" for this suite. 
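The projected-secret case above maps a secret key to a new file name and sets an explicit per-item file mode on the projected volume. Roughly (secret name, key, path, and mode are illustrative):

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400          # the "Item Mode" in the test name
EOF
kubectl logs pod-projected-secrets-demo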
May 9 10:59:47.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:59:47.421: INFO: namespace: e2e-tests-projected-brhgw, resource: bindings, ignored listing per whitelist May 9 10:59:47.473: INFO: namespace e2e-tests-projected-brhgw deletion completed in 6.126281776s • [SLOW TEST:15.192 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:59:47.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3295e0f9-91e4-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 10:59:47.581: INFO: Waiting up to 5m0s for pod "pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-94t72" to be "success or failure" May 9 10:59:47.583: INFO: Pod "pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1.924837ms May 9 10:59:49.588: INFO: Pod "pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006441978s May 9 10:59:51.592: INFO: Pod "pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011003393s May 9 10:59:53.596: INFO: Pod "pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014822072s STEP: Saw pod success May 9 10:59:53.596: INFO: Pod "pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 10:59:53.599: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 9 10:59:53.620: INFO: Waiting for pod pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018 to disappear May 9 10:59:53.624: INFO: Pod pod-configmaps-329a1ec0-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 10:59:53.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-94t72" for this suite. 
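The non-root variant above is the same ConfigMap-volume consumption, but with the pod running under a non-root UID via securityContext. A sketch (names and UID are illustrative):

kubectl create configmap configmap-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # non-root UID; the projected file modes must still allow the read
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id -u && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
EOF
kubectl logs pod-configmap-nonroot-demo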
May 9 10:59:59.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 10:59:59.683: INFO: namespace: e2e-tests-configmap-94t72, resource: bindings, ignored listing per whitelist May 9 10:59:59.730: INFO: namespace e2e-tests-configmap-94t72 deletion completed in 6.102568061s • [SLOW TEST:12.256 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 10:59:59.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 9 10:59:59.861: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2sqs8,SelfLink:/api/v1/namespaces/e2e-tests-watch-2sqs8/configmaps/e2e-watch-test-label-changed,UID:39e4ae7d-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575236,Generation:0,CreationTimestamp:2020-05-09 10:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 10:59:59.861: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2sqs8,SelfLink:/api/v1/namespaces/e2e-tests-watch-2sqs8/configmaps/e2e-watch-test-label-changed,UID:39e4ae7d-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575237,Generation:0,CreationTimestamp:2020-05-09 10:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 9 10:59:59.862: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2sqs8,SelfLink:/api/v1/namespaces/e2e-tests-watch-2sqs8/configmaps/e2e-watch-test-label-changed,UID:39e4ae7d-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575238,Generation:0,CreationTimestamp:2020-05-09 
10:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 9 11:00:09.898: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2sqs8,SelfLink:/api/v1/namespaces/e2e-tests-watch-2sqs8/configmaps/e2e-watch-test-label-changed,UID:39e4ae7d-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575259,Generation:0,CreationTimestamp:2020-05-09 10:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 11:00:09.899: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2sqs8,SelfLink:/api/v1/namespaces/e2e-tests-watch-2sqs8/configmaps/e2e-watch-test-label-changed,UID:39e4ae7d-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575260,Generation:0,CreationTimestamp:2020-05-09 10:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 9 11:00:09.899: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2sqs8,SelfLink:/api/v1/namespaces/e2e-tests-watch-2sqs8/configmaps/e2e-watch-test-label-changed,UID:39e4ae7d-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575261,Generation:0,CreationTimestamp:2020-05-09 10:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:00:09.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-2sqs8" for this suite. 
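The watch semantics exercised above: a watch opened with a label selector receives a DELETED event when an object's labels change so that it no longer matches, and an ADDED event when the matching label is restored, even though the object itself was only modified. With kubectl (object and label names are illustrative):

kubectl create configmap e2e-watch-demo
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored
# in another terminal, watch only objects matching the label
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch
# relabelling away produces a DELETED event on that watch; restoring the value produces an ADDED event
kubectl label configmap e2e-watch-demo watch-this-configmap=temporarily-unmatched --overwrite
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored --overwrite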
May 9 11:00:15.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:00:15.947: INFO: namespace: e2e-tests-watch-2sqs8, resource: bindings, ignored listing per whitelist May 9 11:00:15.999: INFO: namespace e2e-tests-watch-2sqs8 deletion completed in 6.08807068s • [SLOW TEST:16.269 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:00:16.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:00:16.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-pnnzj" to be "success or failure" May 9 11:00:16.481: INFO: Pod "downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.598033ms May 9 11:00:18.512: INFO: Pod "downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054489765s May 9 11:00:20.517: INFO: Pod "downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.059025508s May 9 11:00:22.521: INFO: Pod "downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063277132s STEP: Saw pod success May 9 11:00:22.521: INFO: Pod "downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:00:22.524: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:00:22.579: INFO: Waiting for pod downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018 to disappear May 9 11:00:22.583: INFO: Pod downwardapi-volume-43ceab8b-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:00:22.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pnnzj" for this suite. 
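The downward API case above projects the container's CPU request into a file inside the pod and checks the file's contents. A sketch (values, divisor, and paths are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m           # file should read 250 for a 250m request
EOF
kubectl logs downwardapi-cpu-request-demo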
May 9 11:00:30.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:00:30.783: INFO: namespace: e2e-tests-downward-api-pnnzj, resource: bindings, ignored listing per whitelist May 9 11:00:30.828: INFO: namespace e2e-tests-downward-api-pnnzj deletion completed in 8.242149486s • [SLOW TEST:14.829 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:00:30.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 9 11:00:32.889: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rjtqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rjtqf/configmaps/e2e-watch-test-resource-version,UID:4cf1fc60-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575339,Generation:0,CreationTimestamp:2020-05-09 11:00:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 11:00:32.889: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rjtqf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rjtqf/configmaps/e2e-watch-test-resource-version,UID:4cf1fc60-91e4-11ea-99e8-0242ac110002,ResourceVersion:9575341,Generation:0,CreationTimestamp:2020-05-09 11:00:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:00:32.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-rjtqf" for this suite. 
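Starting a watch from a specific resourceVersion, as above, replays only changes that happened after that version, not the object's original creation. Outside the e2e framework this is easiest to see against the raw API (names and namespace are illustrative):

kubectl create configmap e2e-watch-rv-demo
RV=$(kubectl get configmap e2e-watch-rv-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap e2e-watch-rv-demo -p '{"data":{"mutation":"1"}}'
kubectl proxy --port=8001 &
# this watch starts at $RV, so it delivers the MODIFIED event from the patch (and later changes), not the initial ADDED
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}&fieldSelector=metadata.name%3De2e-watch-rv-demo"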
May 9 11:00:39.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:00:39.636: INFO: namespace: e2e-tests-watch-rjtqf, resource: bindings, ignored listing per whitelist May 9 11:00:39.698: INFO: namespace e2e-tests-watch-rjtqf deletion completed in 6.616841344s • [SLOW TEST:8.869 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:00:39.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-522d3328-91e4-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 11:00:41.211: INFO: Waiting up to 5m0s for pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-jzwnb" to be "success or failure" May 9 11:00:41.301: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 89.458824ms May 9 11:00:43.305: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093600577s May 9 11:00:45.468: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25665185s May 9 11:00:47.870: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.658332347s May 9 11:00:49.960: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748738312s May 9 11:00:52.002: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.790289228s May 9 11:00:54.194: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.982502144s May 9 11:00:56.198: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.986773558s STEP: Saw pod success May 9 11:00:56.198: INFO: Pod "pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:00:56.201: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 11:00:56.264: INFO: Waiting for pod pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018 to disappear May 9 11:00:56.667: INFO: Pod pod-secrets-5257e7b6-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:00:56.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jzwnb" for this suite. May 9 11:01:02.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:01:02.993: INFO: namespace: e2e-tests-secrets-jzwnb, resource: bindings, ignored listing per whitelist May 9 11:01:02.998: INFO: namespace e2e-tests-secrets-jzwnb deletion completed in 6.326694987s • [SLOW TEST:23.300 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:01:02.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 9 11:01:03.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:03.508: INFO: stderr: "" May 9 11:01:03.508: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 9 11:01:03.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:03.661: INFO: stderr: "" May 9 11:01:03.661: INFO: stdout: "update-demo-nautilus-vkq22 update-demo-nautilus-xgsrn " May 9 11:01:03.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vkq22 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:03.792: INFO: stderr: "" May 9 11:01:03.792: INFO: stdout: "" May 9 11:01:03.792: INFO: update-demo-nautilus-vkq22 is created but not running May 9 11:01:08.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:08.910: INFO: stderr: "" May 9 11:01:08.910: INFO: stdout: "update-demo-nautilus-vkq22 update-demo-nautilus-xgsrn " May 9 11:01:08.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vkq22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:09.009: INFO: stderr: "" May 9 11:01:09.009: INFO: stdout: "true" May 9 11:01:09.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vkq22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:09.117: INFO: stderr: "" May 9 11:01:09.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:01:09.117: INFO: validating pod update-demo-nautilus-vkq22 May 9 11:01:09.405: INFO: got data: { "image": "nautilus.jpg" } May 9 11:01:09.405: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 11:01:09.406: INFO: update-demo-nautilus-vkq22 is verified up and running May 9 11:01:09.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xgsrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:09.511: INFO: stderr: "" May 9 11:01:09.511: INFO: stdout: "true" May 9 11:01:09.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xgsrn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:09.643: INFO: stderr: "" May 9 11:01:09.643: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:01:09.643: INFO: validating pod update-demo-nautilus-xgsrn May 9 11:01:09.648: INFO: got data: { "image": "nautilus.jpg" } May 9 11:01:09.648: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 11:01:09.648: INFO: update-demo-nautilus-xgsrn is verified up and running STEP: using delete to clean up resources May 9 11:01:09.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:09.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 9 11:01:09.750: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 9 11:01:09.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-qhsj2' May 9 11:01:09.866: INFO: stderr: "No resources found.\n" May 9 11:01:09.866: INFO: stdout: "" May 9 11:01:09.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-qhsj2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 9 11:01:09.970: INFO: stderr: "" May 9 11:01:09.970: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:01:09.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qhsj2" for this suite. May 9 11:01:31.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:01:32.025: INFO: namespace: e2e-tests-kubectl-qhsj2, resource: bindings, ignored listing per whitelist May 9 11:01:32.079: INFO: namespace e2e-tests-kubectl-qhsj2 deletion completed in 22.105884293s • [SLOW TEST:29.081 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:01:32.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jb2g7 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet May 9 11:01:32.238: INFO: Found 0 stateful pods, waiting for 3 May 9 11:01:42.243: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 11:01:42.243: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 11:01:42.243: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 9 
11:01:52.243: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 11:01:52.243: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 11:01:52.244: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 9 11:01:52.273: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 9 11:02:02.334: INFO: Updating stateful set ss2 May 9 11:02:02.359: INFO: Waiting for Pod e2e-tests-statefulset-jb2g7/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 9 11:02:12.535: INFO: Found 2 stateful pods, waiting for 3 May 9 11:02:22.540: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 11:02:22.540: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 11:02:22.540: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 9 11:02:22.562: INFO: Updating stateful set ss2 May 9 11:02:22.617: INFO: Waiting for Pod e2e-tests-statefulset-jb2g7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 9 11:02:32.643: INFO: Updating stateful set ss2 May 9 11:02:32.667: INFO: Waiting for StatefulSet e2e-tests-statefulset-jb2g7/ss2 to complete update May 9 11:02:32.667: INFO: Waiting for Pod e2e-tests-statefulset-jb2g7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 9 11:02:42.675: INFO: Waiting for StatefulSet e2e-tests-statefulset-jb2g7/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 9 11:02:52.675: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jb2g7 May 9 11:02:52.679: INFO: Scaling statefulset ss2 to 0 May 9 11:03:22.698: INFO: Waiting for statefulset status.replicas updated to 0 May 9 11:03:22.702: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:03:22.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jb2g7" for this suite. 
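The canary and phased rolling updates exercised above are driven by the StatefulSet RollingUpdate strategy's partition field: only pods with an ordinal greater than or equal to the partition receive the new template, so lowering the partition step by step produces the phased rollout seen in the log. The sketch below is not the framework's own code; it just builds such a spec with the typed API objects, and the names, image, and replica count are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	// Only pods with ordinal >= partition get the new template; with
	// partition=2 only ss2-2 is replaced (the canary), and setting it to 0
	// completes the rollout.
	partition := int32(2)

	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"app": "ss2"}},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss2"}},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine",
					}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
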
May 9 11:03:30.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:03:30.822: INFO: namespace: e2e-tests-statefulset-jb2g7, resource: bindings, ignored listing per whitelist May 9 11:03:30.882: INFO: namespace e2e-tests-statefulset-jb2g7 deletion completed in 8.132530748s • [SLOW TEST:118.802 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:03:30.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:03:31.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7qb86" for this suite. 
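The QOS-class test that follows relies on how pods are classified: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits yields BestEffort. A minimal sketch of a pod that should land in the Guaranteed class; the name and image are illustrative, not the test's generated pod, and the pod is only constructed here, not submitted.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	res := v1.ResourceList{
		v1.ResourceCPU:    resource.MustParse("100m"),
		v1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-guaranteed-example"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "test",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
				// Requests == Limits for every resource => QOS class "Guaranteed"
				Resources: v1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	fmt.Printf("pod %q should be classified as Guaranteed once admitted\n", pod.Name)
}

Once the pod is submitted, its status.qosClass field carries the computed class, which is what the test verifies before deleting the pod.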
May 9 11:03:55.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:03:55.156: INFO: namespace: e2e-tests-pods-7qb86, resource: bindings, ignored listing per whitelist May 9 11:03:55.223: INFO: namespace e2e-tests-pods-7qb86 deletion completed in 24.133073039s • [SLOW TEST:24.341 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:03:55.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:03:55.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-lnj8j" to be "success or failure" May 9 11:03:55.391: INFO: Pod "downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.752508ms May 9 11:03:57.447: INFO: Pod "downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069362008s May 9 11:03:59.451: INFO: Pod "downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073657523s May 9 11:04:01.455: INFO: Pod "downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077379405s STEP: Saw pod success May 9 11:04:01.455: INFO: Pod "downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:04:01.459: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:04:01.480: INFO: Waiting for pod downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018 to disappear May 9 11:04:01.485: INFO: Pod downwardapi-volume-c6450784-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:04:01.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lnj8j" for this suite. 
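The DefaultMode assertion above hinges on the projected volume's DefaultMode field, which sets the file mode applied to every projected file unless an individual item overrides it. A rough equivalent of the volume that test mounts; the volume name, path, and mode value are assumptions, not the generated ones.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // files appear as -r-------- inside the container

	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
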
May 9 11:04:07.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:04:07.510: INFO: namespace: e2e-tests-projected-lnj8j, resource: bindings, ignored listing per whitelist May 9 11:04:07.597: INFO: namespace e2e-tests-projected-lnj8j deletion completed in 6.109670015s • [SLOW TEST:12.374 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:04:07.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 9 11:04:07.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 9 11:04:07.860: INFO: stderr: "" May 9 11:04:07.860: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:04:07.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mn6hb" for this suite. 
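The kubectl api-versions check can also be reproduced programmatically with client-go's discovery client; the sketch below lists the served group/versions and looks for the core "v1" entry the test asserts on. It assumes a pre-1.18 client-go vintage matching this suite and the kubeconfig path shown in the log; error handling is kept minimal.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, ver := range g.Versions {
			// The legacy core group is reported with an empty group name, so
			// its GroupVersion string is just "v1".
			if ver.GroupVersion == "v1" {
				fmt.Println("found core group/version:", ver.GroupVersion)
			}
		}
	}
}
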
May 9 11:04:13.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:04:13.959: INFO: namespace: e2e-tests-kubectl-mn6hb, resource: bindings, ignored listing per whitelist May 9 11:04:13.964: INFO: namespace e2e-tests-kubectl-mn6hb deletion completed in 6.100169132s • [SLOW TEST:6.366 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:04:13.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vms74 STEP: creating a selector STEP: Creating the service pods in kubernetes May 9 11:04:14.043: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 9 11:04:44.192: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.60:8080/dial?request=hostName&protocol=udp&host=10.244.1.59&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-vms74 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:04:44.192: INFO: >>> kubeConfig: /root/.kube/config I0509 11:04:44.226475 6 log.go:172] (0xc000db18c0) (0xc001986780) Create stream I0509 11:04:44.226510 6 log.go:172] (0xc000db18c0) (0xc001986780) Stream added, broadcasting: 1 I0509 11:04:44.229178 6 log.go:172] (0xc000db18c0) Reply frame received for 1 I0509 11:04:44.229216 6 log.go:172] (0xc000db18c0) (0xc001ea43c0) Create stream I0509 11:04:44.229232 6 log.go:172] (0xc000db18c0) (0xc001ea43c0) Stream added, broadcasting: 3 I0509 11:04:44.230056 6 log.go:172] (0xc000db18c0) Reply frame received for 3 I0509 11:04:44.230073 6 log.go:172] (0xc000db18c0) (0xc001ea4460) Create stream I0509 11:04:44.230078 6 log.go:172] (0xc000db18c0) (0xc001ea4460) Stream added, broadcasting: 5 I0509 11:04:44.230986 6 log.go:172] (0xc000db18c0) Reply frame received for 5 I0509 11:04:44.306429 6 log.go:172] (0xc000db18c0) Data frame received for 3 I0509 11:04:44.306466 6 log.go:172] (0xc001ea43c0) (3) Data frame handling I0509 11:04:44.306504 6 log.go:172] (0xc001ea43c0) (3) Data frame sent I0509 11:04:44.307071 6 log.go:172] (0xc000db18c0) Data frame received for 3 I0509 11:04:44.307102 6 log.go:172] (0xc001ea43c0) (3) Data frame handling I0509 11:04:44.307386 6 log.go:172] (0xc000db18c0) Data frame received for 5 I0509 11:04:44.307407 6 log.go:172] (0xc001ea4460) (5) 
Data frame handling I0509 11:04:44.308792 6 log.go:172] (0xc000db18c0) Data frame received for 1 I0509 11:04:44.308813 6 log.go:172] (0xc001986780) (1) Data frame handling I0509 11:04:44.308830 6 log.go:172] (0xc001986780) (1) Data frame sent I0509 11:04:44.309006 6 log.go:172] (0xc000db18c0) (0xc001986780) Stream removed, broadcasting: 1 I0509 11:04:44.309063 6 log.go:172] (0xc000db18c0) Go away received I0509 11:04:44.309125 6 log.go:172] (0xc000db18c0) (0xc001986780) Stream removed, broadcasting: 1 I0509 11:04:44.309168 6 log.go:172] (0xc000db18c0) (0xc001ea43c0) Stream removed, broadcasting: 3 I0509 11:04:44.309204 6 log.go:172] (0xc000db18c0) (0xc001ea4460) Stream removed, broadcasting: 5 May 9 11:04:44.309: INFO: Waiting for endpoints: map[] May 9 11:04:44.312: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.60:8080/dial?request=hostName&protocol=udp&host=10.244.2.180&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-vms74 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:04:44.312: INFO: >>> kubeConfig: /root/.kube/config I0509 11:04:44.339042 6 log.go:172] (0xc0000ebce0) (0xc001a92320) Create stream I0509 11:04:44.339073 6 log.go:172] (0xc0000ebce0) (0xc001a92320) Stream added, broadcasting: 1 I0509 11:04:44.340726 6 log.go:172] (0xc0000ebce0) Reply frame received for 1 I0509 11:04:44.340793 6 log.go:172] (0xc0000ebce0) (0xc001e9a640) Create stream I0509 11:04:44.340811 6 log.go:172] (0xc0000ebce0) (0xc001e9a640) Stream added, broadcasting: 3 I0509 11:04:44.342028 6 log.go:172] (0xc0000ebce0) Reply frame received for 3 I0509 11:04:44.342061 6 log.go:172] (0xc0000ebce0) (0xc001986820) Create stream I0509 11:04:44.342071 6 log.go:172] (0xc0000ebce0) (0xc001986820) Stream added, broadcasting: 5 I0509 11:04:44.342943 6 log.go:172] (0xc0000ebce0) Reply frame received for 5 I0509 11:04:44.409325 6 log.go:172] (0xc0000ebce0) Data frame received for 3 I0509 11:04:44.409350 6 log.go:172] (0xc001e9a640) (3) Data frame handling I0509 11:04:44.409363 6 log.go:172] (0xc001e9a640) (3) Data frame sent I0509 11:04:44.409721 6 log.go:172] (0xc0000ebce0) Data frame received for 3 I0509 11:04:44.409736 6 log.go:172] (0xc001e9a640) (3) Data frame handling I0509 11:04:44.409852 6 log.go:172] (0xc0000ebce0) Data frame received for 5 I0509 11:04:44.409871 6 log.go:172] (0xc001986820) (5) Data frame handling I0509 11:04:44.411312 6 log.go:172] (0xc0000ebce0) Data frame received for 1 I0509 11:04:44.411336 6 log.go:172] (0xc001a92320) (1) Data frame handling I0509 11:04:44.411353 6 log.go:172] (0xc001a92320) (1) Data frame sent I0509 11:04:44.411368 6 log.go:172] (0xc0000ebce0) (0xc001a92320) Stream removed, broadcasting: 1 I0509 11:04:44.411388 6 log.go:172] (0xc0000ebce0) Go away received I0509 11:04:44.411506 6 log.go:172] (0xc0000ebce0) (0xc001a92320) Stream removed, broadcasting: 1 I0509 11:04:44.411522 6 log.go:172] (0xc0000ebce0) (0xc001e9a640) Stream removed, broadcasting: 3 I0509 11:04:44.411532 6 log.go:172] (0xc0000ebce0) (0xc001986820) Stream removed, broadcasting: 5 May 9 11:04:44.411: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:04:44.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-vms74" for this suite. 
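The curl shown above hits the test webserver's /dial endpoint, which forwards a UDP "hostName" probe to the target pod and reports whatever replies it hears as JSON. A stand-alone sketch of the same probe; the addresses and ports are the ones from the log, only reachable from inside that cluster, and the response is printed raw rather than assuming its exact JSON shape.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Same probe the test runs from host-test-container-pod: ask the webserver
	// at 10.244.1.60 to dial the target pod over UDP and relay the hostname
	// it receives back.
	url := "http://10.244.1.60:8080/dial?request=hostName&protocol=udp&host=10.244.1.59&port=8081&tries=1"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("dial response: %s\n", body)
}
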
May 9 11:05:06.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:05:06.672: INFO: namespace: e2e-tests-pod-network-test-vms74, resource: bindings, ignored listing per whitelist May 9 11:05:06.702: INFO: namespace e2e-tests-pod-network-test-vms74 deletion completed in 22.287060079s • [SLOW TEST:52.738 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:05:06.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f0dc68fa-91e4-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 11:05:06.812: INFO: Waiting up to 5m0s for pod "pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-ck7hc" to be "success or failure" May 9 11:05:06.816: INFO: Pod "pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099932ms May 9 11:05:08.819: INFO: Pod "pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007190041s May 9 11:05:10.823: INFO: Pod "pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011171846s STEP: Saw pod success May 9 11:05:10.824: INFO: Pod "pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:05:10.827: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 11:05:10.873: INFO: Waiting for pod pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018 to disappear May 9 11:05:10.900: INFO: Pod pod-secrets-f0df934d-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:05:10.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ck7hc" for this suite. 
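The "non-root with defaultMode and fsGroup set" pod combines three knobs: a non-root RunAsUser, an FSGroup so the kubelet chowns the volume to a group the container belongs to, and a restrictive DefaultMode on the secret volume. A minimal sketch of such a pod spec; the uid/gid, mode, image, and names are illustrative, and the object is only constructed and printed here.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	fsGroup := int64(1001)
	mode := int32(0440) // group-readable, so FSGroup membership is what grants access

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-nonroot-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Containers: []v1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative; the suite uses its own mounttest image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName:  "secret-test", // stands in for the generated secret name
						DefaultMode: &mode,
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
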
May 9 11:05:16.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:05:17.026: INFO: namespace: e2e-tests-secrets-ck7hc, resource: bindings, ignored listing per whitelist May 9 11:05:17.044: INFO: namespace e2e-tests-secrets-ck7hc deletion completed in 6.104674374s • [SLOW TEST:10.342 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:05:17.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 9 11:05:17.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 9 11:05:17.217: INFO: stderr: "" May 9 11:05:17.217: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:05:17.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tktq5" for this suite. 
May 9 11:05:23.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:05:23.294: INFO: namespace: e2e-tests-kubectl-tktq5, resource: bindings, ignored listing per whitelist May 9 11:05:23.318: INFO: namespace e2e-tests-kubectl-tktq5 deletion completed in 6.098586079s • [SLOW TEST:6.274 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:05:23.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-faca86f8-91e4-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:05:23.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-mdvqt" to be "success or failure" May 9 11:05:23.475: INFO: Pod "pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.445158ms May 9 11:05:25.479: INFO: Pod "pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013151457s May 9 11:05:27.502: INFO: Pod "pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.036632034s May 9 11:05:29.507: INFO: Pod "pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04102099s STEP: Saw pod success May 9 11:05:29.507: INFO: Pod "pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:05:29.510: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 9 11:05:29.530: INFO: Waiting for pod pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018 to disappear May 9 11:05:29.541: INFO: Pod pod-projected-configmaps-facb01bd-91e4-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:05:29.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mdvqt" for this suite. 
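"Mappings and Item mode set" in the projected configMap test refers to the Items list of the configMap projection: each entry remaps a key to a path and can carry its own file mode, which overrides any DefaultMode on the volume. A hedged sketch of that volume shape; the key, path, and mode values are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400) // per-item mode wins over the volume's DefaultMode

	vol := v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map", // generated name in the real run
						},
						Items: []v1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &itemMode,
						}},
					},
				}},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
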
May 9 11:05:35.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:05:35.701: INFO: namespace: e2e-tests-projected-mdvqt, resource: bindings, ignored listing per whitelist May 9 11:05:35.703: INFO: namespace e2e-tests-projected-mdvqt deletion completed in 6.158331969s • [SLOW TEST:12.384 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:05:35.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018 May 9 11:05:35.847: INFO: Pod name my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018: Found 0 pods out of 1 May 9 11:05:40.851: INFO: Pod name my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018: Found 1 pods out of 1 May 9 11:05:40.851: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018" are running May 9 11:05:40.853: INFO: Pod "my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018-vcbr7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 11:05:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 11:05:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 11:05:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-09 11:05:35 +0000 UTC Reason: Message:}]) May 9 11:05:40.853: INFO: Trying to dial the pod May 9 11:05:45.863: INFO: Controller my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018: Got expected result from replica 1 [my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018-vcbr7]: "my-hostname-basic-022f02c9-91e5-11ea-a20c-0242ac110018-vcbr7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:05:45.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-nqw6r" for this suite. 
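The ReplicationController test boils down to: create an RC whose pods serve their own hostname over HTTP, wait until the desired replica count is running, then dial each replica and compare the response body with the pod name (the "Got expected result from replica 1" line above). A compact sketch of such an RC object; the labels, container port, and serve-hostname image are illustrative stand-ins for the generated ones.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic-example"}

	rc := &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-example"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // illustrative public image
						Ports: []v1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
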
May 9 11:05:51.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:05:52.003: INFO: namespace: e2e-tests-replication-controller-nqw6r, resource: bindings, ignored listing per whitelist May 9 11:05:52.003: INFO: namespace e2e-tests-replication-controller-nqw6r deletion completed in 6.136759337s • [SLOW TEST:16.300 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:05:52.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sj78w STEP: creating a selector STEP: Creating the service pods in kubernetes May 9 11:05:52.166: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 9 11:06:20.310: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.63:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-sj78w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:06:20.310: INFO: >>> kubeConfig: /root/.kube/config I0509 11:06:20.337235 6 log.go:172] (0xc0000ebe40) (0xc000c99900) Create stream I0509 11:06:20.337272 6 log.go:172] (0xc0000ebe40) (0xc000c99900) Stream added, broadcasting: 1 I0509 11:06:20.338868 6 log.go:172] (0xc0000ebe40) Reply frame received for 1 I0509 11:06:20.338908 6 log.go:172] (0xc0000ebe40) (0xc000c99b80) Create stream I0509 11:06:20.338922 6 log.go:172] (0xc0000ebe40) (0xc000c99b80) Stream added, broadcasting: 3 I0509 11:06:20.339784 6 log.go:172] (0xc0000ebe40) Reply frame received for 3 I0509 11:06:20.339815 6 log.go:172] (0xc0000ebe40) (0xc001c57360) Create stream I0509 11:06:20.339825 6 log.go:172] (0xc0000ebe40) (0xc001c57360) Stream added, broadcasting: 5 I0509 11:06:20.340641 6 log.go:172] (0xc0000ebe40) Reply frame received for 5 I0509 11:06:20.423069 6 log.go:172] (0xc0000ebe40) Data frame received for 5 I0509 11:06:20.423115 6 log.go:172] (0xc001c57360) (5) Data frame handling I0509 11:06:20.423155 6 log.go:172] (0xc0000ebe40) Data frame received for 3 I0509 11:06:20.423178 6 log.go:172] (0xc000c99b80) (3) Data frame handling I0509 11:06:20.423195 6 log.go:172] (0xc000c99b80) (3) Data frame sent I0509 11:06:20.423206 6 log.go:172] (0xc0000ebe40) Data frame received for 3 I0509 11:06:20.423214 6 log.go:172] (0xc000c99b80) (3) Data frame handling I0509 11:06:20.424609 6 log.go:172] (0xc0000ebe40) Data frame received 
for 1 I0509 11:06:20.424643 6 log.go:172] (0xc000c99900) (1) Data frame handling I0509 11:06:20.424675 6 log.go:172] (0xc000c99900) (1) Data frame sent I0509 11:06:20.424701 6 log.go:172] (0xc0000ebe40) (0xc000c99900) Stream removed, broadcasting: 1 I0509 11:06:20.424741 6 log.go:172] (0xc0000ebe40) Go away received I0509 11:06:20.424797 6 log.go:172] (0xc0000ebe40) (0xc000c99900) Stream removed, broadcasting: 1 I0509 11:06:20.424809 6 log.go:172] (0xc0000ebe40) (0xc000c99b80) Stream removed, broadcasting: 3 I0509 11:06:20.424820 6 log.go:172] (0xc0000ebe40) (0xc001c57360) Stream removed, broadcasting: 5 May 9 11:06:20.424: INFO: Found all expected endpoints: [netserver-0] May 9 11:06:20.434: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.182:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-sj78w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:06:20.434: INFO: >>> kubeConfig: /root/.kube/config I0509 11:06:20.460379 6 log.go:172] (0xc0016362c0) (0xc0003b14a0) Create stream I0509 11:06:20.460416 6 log.go:172] (0xc0016362c0) (0xc0003b14a0) Stream added, broadcasting: 1 I0509 11:06:20.462619 6 log.go:172] (0xc0016362c0) Reply frame received for 1 I0509 11:06:20.462651 6 log.go:172] (0xc0016362c0) (0xc0000fdc20) Create stream I0509 11:06:20.462662 6 log.go:172] (0xc0016362c0) (0xc0000fdc20) Stream added, broadcasting: 3 I0509 11:06:20.463482 6 log.go:172] (0xc0016362c0) Reply frame received for 3 I0509 11:06:20.463514 6 log.go:172] (0xc0016362c0) (0xc0003b1720) Create stream I0509 11:06:20.463532 6 log.go:172] (0xc0016362c0) (0xc0003b1720) Stream added, broadcasting: 5 I0509 11:06:20.464403 6 log.go:172] (0xc0016362c0) Reply frame received for 5 I0509 11:06:20.536014 6 log.go:172] (0xc0016362c0) Data frame received for 5 I0509 11:06:20.536055 6 log.go:172] (0xc0003b1720) (5) Data frame handling I0509 11:06:20.536083 6 log.go:172] (0xc0016362c0) Data frame received for 3 I0509 11:06:20.536094 6 log.go:172] (0xc0000fdc20) (3) Data frame handling I0509 11:06:20.536106 6 log.go:172] (0xc0000fdc20) (3) Data frame sent I0509 11:06:20.536116 6 log.go:172] (0xc0016362c0) Data frame received for 3 I0509 11:06:20.536124 6 log.go:172] (0xc0000fdc20) (3) Data frame handling I0509 11:06:20.537484 6 log.go:172] (0xc0016362c0) Data frame received for 1 I0509 11:06:20.537498 6 log.go:172] (0xc0003b14a0) (1) Data frame handling I0509 11:06:20.537509 6 log.go:172] (0xc0003b14a0) (1) Data frame sent I0509 11:06:20.537521 6 log.go:172] (0xc0016362c0) (0xc0003b14a0) Stream removed, broadcasting: 1 I0509 11:06:20.537535 6 log.go:172] (0xc0016362c0) Go away received I0509 11:06:20.537703 6 log.go:172] (0xc0016362c0) (0xc0003b14a0) Stream removed, broadcasting: 1 I0509 11:06:20.537721 6 log.go:172] (0xc0016362c0) (0xc0000fdc20) Stream removed, broadcasting: 3 I0509 11:06:20.537733 6 log.go:172] (0xc0016362c0) (0xc0003b1720) Stream removed, broadcasting: 5 May 9 11:06:20.537: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:06:20.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-sj78w" for this suite. 
May 9 11:06:44.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:06:44.577: INFO: namespace: e2e-tests-pod-network-test-sj78w, resource: bindings, ignored listing per whitelist May 9 11:06:44.664: INFO: namespace e2e-tests-pod-network-test-sj78w deletion completed in 24.123545481s • [SLOW TEST:52.661 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:06:44.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-2b45b99d-91e5-11ea-a20c-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-2b45b963-91e5-11ea-a20c-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin May 9 11:06:44.808: INFO: Waiting up to 5m0s for pod "projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-wgczp" to be "success or failure" May 9 11:06:44.812: INFO: Pod "projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.60062ms May 9 11:06:46.816: INFO: Pod "projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007292766s May 9 11:06:49.751: INFO: Pod "projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.942520915s STEP: Saw pod success May 9 11:06:49.751: INFO: Pod "projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:06:49.977: INFO: Trying to get logs from node hunter-worker pod projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018 container projected-all-volume-test: STEP: delete the pod May 9 11:06:50.216: INFO: Waiting for pod projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:06:50.249: INFO: Pod projected-volume-2b45b8fe-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:06:50.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wgczp" for this suite. 
May 9 11:06:56.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:06:56.385: INFO: namespace: e2e-tests-projected-wgczp, resource: bindings, ignored listing per whitelist May 9 11:06:56.423: INFO: namespace e2e-tests-projected-wgczp deletion completed in 6.140100602s • [SLOW TEST:11.758 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:06:56.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-3250181e-91e5-11ea-a20c-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-3250181e-91e5-11ea-a20c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:08:23.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-brjdp" for this suite. 
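The long pause in that test ("waiting to observe update in volume") is the kubelet's periodic sync of configMap-backed volumes: once the ConfigMap object is updated through the API, the projected file eventually changes without restarting the pod. A sketch of the update half only, assuming a pre-1.18 client-go whose Get/Update calls take no context; the namespace and ConfigMap name are illustrative.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "default", "projected-configmap-test-upd-example" // illustrative namespace/name

	cm, err := cs.CoreV1().ConfigMaps(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Data = map[string]string{"data-1": "value-2"} // new content the mounted file should eventually show

	if _, err := cs.CoreV1().ConfigMaps(ns).Update(cm); err != nil {
		panic(err)
	}
	fmt.Println("ConfigMap updated; the kubelet refreshes the projected file on its next sync")
}
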
May 9 11:08:45.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:08:46.196: INFO: namespace: e2e-tests-projected-brjdp, resource: bindings, ignored listing per whitelist May 9 11:08:46.203: INFO: namespace e2e-tests-projected-brjdp deletion completed in 22.769662476s • [SLOW TEST:109.780 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:08:46.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:08:46.390: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-96642" to be "success or failure" May 9 11:08:46.422: INFO: Pod "downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.982929ms May 9 11:08:48.427: INFO: Pod "downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036289871s May 9 11:08:50.430: INFO: Pod "downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039983127s STEP: Saw pod success May 9 11:08:50.430: INFO: Pod "downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:08:50.433: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:08:50.520: INFO: Waiting for pod downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:08:50.646: INFO: Pod downwardapi-volume-73bf19de-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:08:50.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-96642" for this suite. 
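The memory-limit value comes from a downwardAPI volume item with a resourceFieldRef: the kubelet writes the named container's limits.memory, scaled by the divisor, into the given path, and the test reads that file back. A minimal sketch of that item; the container name, path, and divisor are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &v1.ResourceFieldSelector{
						ContainerName: "client-container",      // must match the consuming container
						Resource:      "limits.memory",         // the value the test reads back
						Divisor:       resource.MustParse("1"), // report the limit in bytes
					},
				}},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

The cpu-limit test that follows in the log is the same pattern with Resource set to "limits.cpu".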
May 9 11:08:56.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:08:56.737: INFO: namespace: e2e-tests-downward-api-96642, resource: bindings, ignored listing per whitelist May 9 11:08:56.751: INFO: namespace e2e-tests-downward-api-96642 deletion completed in 6.099379359s • [SLOW TEST:10.547 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:08:56.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:08:56.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-h9d5l" to be "success or failure" May 9 11:08:56.913: INFO: Pod "downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.751314ms May 9 11:08:58.918: INFO: Pod "downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02126225s May 9 11:09:00.922: INFO: Pod "downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025859642s STEP: Saw pod success May 9 11:09:00.922: INFO: Pod "downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:09:00.926: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:09:00.974: INFO: Waiting for pod downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:09:00.983: INFO: Pod downwardapi-volume-7a040edf-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:09:00.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-h9d5l" for this suite. 
May 9 11:09:07.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:09:07.034: INFO: namespace: e2e-tests-downward-api-h9d5l, resource: bindings, ignored listing per whitelist May 9 11:09:07.096: INFO: namespace e2e-tests-downward-api-h9d5l deletion completed in 6.091733749s • [SLOW TEST:10.345 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:09:07.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0509 11:09:17.301385 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 9 11:09:17.301: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:09:17.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5k5mt" for this suite. 
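"Not orphaning" in the garbage collector test means the RC is deleted with a propagation policy that lets the garbage collector remove the dependent pods as well (background or foreground), rather than Orphan, which would leave them behind. A hedged client-go sketch of that delete, again assuming a pre-1.18 Delete signature without a context; the namespace and RC name are illustrative.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation: the RC object goes away immediately and the
	// garbage collector deletes its pods afterwards, which is what the test
	// waits for before gathering metrics.
	policy := metav1.DeletePropagationBackground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		"simpletest-rc-example", // illustrative name; the test generates its own
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("RC deleted; dependent pods will be garbage collected")
}
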
May 9 11:09:23.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:09:23.436: INFO: namespace: e2e-tests-gc-5k5mt, resource: bindings, ignored listing per whitelist May 9 11:09:23.456: INFO: namespace e2e-tests-gc-5k5mt deletion completed in 6.151977848s • [SLOW TEST:16.360 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:09:23.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:09:23.593: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 9 11:09:23.641: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:23.643: INFO: Number of nodes with available pods: 0 May 9 11:09:23.643: INFO: Node hunter-worker is running more than one daemon pod May 9 11:09:24.648: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:24.652: INFO: Number of nodes with available pods: 0 May 9 11:09:24.652: INFO: Node hunter-worker is running more than one daemon pod May 9 11:09:25.648: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:25.652: INFO: Number of nodes with available pods: 0 May 9 11:09:25.652: INFO: Node hunter-worker is running more than one daemon pod May 9 11:09:26.656: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:26.660: INFO: Number of nodes with available pods: 0 May 9 11:09:26.660: INFO: Node hunter-worker is running more than one daemon pod May 9 11:09:27.647: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:27.715: INFO: Number of nodes with available pods: 1 May 9 11:09:27.715: INFO: Node hunter-worker2 is running more than one daemon pod May 9 11:09:28.646: INFO: DaemonSet pods can't tolerate node hunter-control-plane 
with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:28.648: INFO: Number of nodes with available pods: 2 May 9 11:09:28.649: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 9 11:09:28.673: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:28.673: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:28.691: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:29.710: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:29.710: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:29.714: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:30.696: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:30.696: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:30.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:31.694: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:31.694: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:31.698: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:32.695: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:32.695: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:32.695: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:32.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:33.695: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:33.695: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 9 11:09:33.695: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:33.699: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:34.696: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:34.696: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:34.696: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:34.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:35.703: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:35.703: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:35.703: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:35.708: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:36.696: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:36.696: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:36.696: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:36.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:37.710: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:37.710: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:37.710: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:37.713: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:38.696: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:38.696: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:38.696: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:38.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:39.696: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:39.696: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 9 11:09:39.696: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:39.699: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:40.695: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:40.696: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:40.696: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:40.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:41.727: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:41.727: INFO: Wrong image for pod: daemon-set-zmrzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:41.727: INFO: Pod daemon-set-zmrzq is not available May 9 11:09:41.731: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:42.696: INFO: Pod daemon-set-77bbj is not available May 9 11:09:42.696: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:42.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:43.694: INFO: Pod daemon-set-77bbj is not available May 9 11:09:43.694: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:43.698: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:44.695: INFO: Pod daemon-set-77bbj is not available May 9 11:09:44.696: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:44.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:45.695: INFO: Pod daemon-set-77bbj is not available May 9 11:09:45.695: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:45.702: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:46.700: INFO: Wrong image for pod: daemon-set-hblsg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:46.704: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:47.696: INFO: Wrong image for pod: daemon-set-hblsg. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 9 11:09:47.696: INFO: Pod daemon-set-hblsg is not available May 9 11:09:47.702: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:48.695: INFO: Pod daemon-set-l95xf is not available May 9 11:09:48.698: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 9 11:09:48.702: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:48.705: INFO: Number of nodes with available pods: 1 May 9 11:09:48.705: INFO: Node hunter-worker is running more than one daemon pod May 9 11:09:49.872: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:49.906: INFO: Number of nodes with available pods: 1 May 9 11:09:49.906: INFO: Node hunter-worker is running more than one daemon pod May 9 11:09:50.711: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:50.715: INFO: Number of nodes with available pods: 1 May 9 11:09:50.715: INFO: Node hunter-worker is running more than one daemon pod May 9 11:09:51.711: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 11:09:51.714: INFO: Number of nodes with available pods: 2 May 9 11:09:51.714: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cjj84, will wait for the garbage collector to delete the pods May 9 11:09:51.788: INFO: Deleting DaemonSet.extensions daemon-set took: 6.436499ms May 9 11:09:51.888: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.233819ms May 9 11:10:01.792: INFO: Number of nodes with available pods: 0 May 9 11:10:01.792: INFO: Number of running nodes: 0, number of available pods: 0 May 9 11:10:01.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cjj84/daemonsets","resourceVersion":"9577260"},"items":null} May 9 11:10:01.798: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cjj84/pods","resourceVersion":"9577260"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:10:01.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-cjj84" for this suite. 
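The rolling-update behaviour traced above (old nginx pods replaced one node at a time by redis-image pods) comes from the DaemonSet's updateStrategy. A minimal sketch of the same flow, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Changing the pod template image triggers the rolling update the test waits for.
kubectl set image daemonset/daemon-set-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set-demo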
May 9 11:10:07.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:10:07.908: INFO: namespace: e2e-tests-daemonsets-cjj84, resource: bindings, ignored listing per whitelist May 9 11:10:07.924: INFO: namespace e2e-tests-daemonsets-cjj84 deletion completed in 6.093387118s • [SLOW TEST:44.467 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:10:07.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-a469c6dc-91e5-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:10:08.077: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-q7v6g" to be "success or failure" May 9 11:10:08.080: INFO: Pod "pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661574ms May 9 11:10:10.098: INFO: Pod "pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020732358s May 9 11:10:12.102: INFO: Pod "pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02452811s STEP: Saw pod success May 9 11:10:12.102: INFO: Pod "pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:10:12.105: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 9 11:10:12.227: INFO: Waiting for pod pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:10:12.386: INFO: Pod pod-projected-configmaps-a46a75e9-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:10:12.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q7v6g" for this suite. 
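The projected-ConfigMap spec above mounts a ConfigMap key under a remapped path and reads it as a non-root user. An illustrative equivalent (all names invented):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # run the container as a non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/remapped-path"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: remapped-path   # the key is exposed under this mapped file name
EOF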
May 9 11:10:18.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:10:18.489: INFO: namespace: e2e-tests-projected-q7v6g, resource: bindings, ignored listing per whitelist May 9 11:10:18.494: INFO: namespace e2e-tests-projected-q7v6g deletion completed in 6.104104844s • [SLOW TEST:10.570 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:10:18.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 9 11:10:18.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x2wnt' May 9 11:10:21.113: INFO: stderr: "" May 9 11:10:21.113: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 9 11:10:22.183: INFO: Selector matched 1 pods for map[app:redis] May 9 11:10:22.183: INFO: Found 0 / 1 May 9 11:10:23.117: INFO: Selector matched 1 pods for map[app:redis] May 9 11:10:23.117: INFO: Found 0 / 1 May 9 11:10:24.117: INFO: Selector matched 1 pods for map[app:redis] May 9 11:10:24.117: INFO: Found 0 / 1 May 9 11:10:25.118: INFO: Selector matched 1 pods for map[app:redis] May 9 11:10:25.118: INFO: Found 1 / 1 May 9 11:10:25.118: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 9 11:10:25.121: INFO: Selector matched 1 pods for map[app:redis] May 9 11:10:25.121: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 9 11:10:25.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-m7m4z redis-master --namespace=e2e-tests-kubectl-x2wnt' May 9 11:10:25.237: INFO: stderr: "" May 9 11:10:25.237: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 May 11:10:23.746 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 May 11:10:23.746 # Server started, Redis version 3.2.12\n1:M 09 May 11:10:23.747 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 May 11:10:23.747 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 9 11:10:25.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m7m4z redis-master --namespace=e2e-tests-kubectl-x2wnt --tail=1' May 9 11:10:25.356: INFO: stderr: "" May 9 11:10:25.356: INFO: stdout: "1:M 09 May 11:10:23.747 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 9 11:10:25.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m7m4z redis-master --namespace=e2e-tests-kubectl-x2wnt --limit-bytes=1' May 9 11:10:25.481: INFO: stderr: "" May 9 11:10:25.481: INFO: stdout: " " STEP: exposing timestamps May 9 11:10:25.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m7m4z redis-master --namespace=e2e-tests-kubectl-x2wnt --tail=1 --timestamps' May 9 11:10:25.582: INFO: stderr: "" May 9 11:10:25.582: INFO: stdout: "2020-05-09T11:10:23.74727047Z 1:M 09 May 11:10:23.747 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 9 11:10:28.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m7m4z redis-master --namespace=e2e-tests-kubectl-x2wnt --since=1s' May 9 11:10:28.238: INFO: stderr: "" May 9 11:10:28.238: INFO: stdout: "" May 9 11:10:28.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m7m4z redis-master --namespace=e2e-tests-kubectl-x2wnt --since=24h' May 9 11:10:28.340: INFO: stderr: "" May 9 11:10:28.340: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 May 11:10:23.746 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 May 11:10:23.746 # Server started, Redis version 3.2.12\n1:M 09 May 11:10:23.747 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 May 11:10:23.747 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 9 11:10:28.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x2wnt' May 9 11:10:28.468: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 11:10:28.468: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 9 11:10:28.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-x2wnt' May 9 11:10:28.594: INFO: stderr: "No resources found.\n" May 9 11:10:28.594: INFO: stdout: "" May 9 11:10:28.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-x2wnt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 9 11:10:28.698: INFO: stderr: "" May 9 11:10:28.698: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:10:28.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-x2wnt" for this suite. 
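The log-filtering steps above map directly onto kubectl flags (the test invokes the older "kubectl log" alias; "kubectl logs" is the canonical form). The same queries can be run by hand against the redis-master pod from this run, while its namespace still exists:

kubectl logs redis-master-m7m4z -c redis-master -n e2e-tests-kubectl-x2wnt                  # full container log
kubectl logs redis-master-m7m4z -c redis-master -n e2e-tests-kubectl-x2wnt --tail=1         # last line only
kubectl logs redis-master-m7m4z -c redis-master -n e2e-tests-kubectl-x2wnt --limit-bytes=1  # first byte only
kubectl logs redis-master-m7m4z -c redis-master -n e2e-tests-kubectl-x2wnt --tail=1 --timestamps
kubectl logs redis-master-m7m4z -c redis-master -n e2e-tests-kubectl-x2wnt --since=1s       # empty unless something was logged in the last second
kubectl logs redis-master-m7m4z -c redis-master -n e2e-tests-kubectl-x2wnt --since=24h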
May 9 11:10:34.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:10:34.929: INFO: namespace: e2e-tests-kubectl-x2wnt, resource: bindings, ignored listing per whitelist May 9 11:10:34.931: INFO: namespace e2e-tests-kubectl-x2wnt deletion completed in 6.229756069s • [SLOW TEST:16.437 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:10:34.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 9 11:10:35.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-px69l' May 9 11:10:35.122: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 11:10:35.122: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 9 11:10:35.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-px69l' May 9 11:10:35.278: INFO: stderr: "" May 9 11:10:35.278: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:10:35.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-px69l" for this suite. 
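The command this spec runs (together with its deprecation warning) can be reproduced verbatim; newer kubectl releases drop the generator flag in favour of kubectl create job. Using the invocation from the log itself:

kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-px69l
kubectl get jobs --namespace=e2e-tests-kubectl-px69l       # job.batch/e2e-test-nginx-job listed
kubectl delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-px69l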
May 9 11:10:57.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:10:57.365: INFO: namespace: e2e-tests-kubectl-px69l, resource: bindings, ignored listing per whitelist May 9 11:10:57.390: INFO: namespace e2e-tests-kubectl-px69l deletion completed in 22.103873048s • [SLOW TEST:22.459 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:10:57.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-pkdgh/configmap-test-c1e192c8-91e5-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:10:57.479: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-pkdgh" to be "success or failure" May 9 11:10:57.489: INFO: Pod "pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.710674ms May 9 11:10:59.493: INFO: Pod "pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014738972s May 9 11:11:01.497: INFO: Pod "pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018529873s STEP: Saw pod success May 9 11:11:01.497: INFO: Pod "pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:11:01.500: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018 container env-test: STEP: delete the pod May 9 11:11:01.533: INFO: Waiting for pod pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:11:01.543: INFO: Pod pod-configmaps-c1e2583e-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:11:01.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pkdgh" for this suite. 
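Consuming a ConfigMap key through an environment variable, as the spec above does, needs only a configMapKeyRef; a minimal sketch with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
EOF
kubectl logs configmap-env-pod    # prints CONFIG_DATA_1=value-1 once the pod has run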
May 9 11:11:07.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:11:07.607: INFO: namespace: e2e-tests-configmap-pkdgh, resource: bindings, ignored listing per whitelist May 9 11:11:07.634: INFO: namespace e2e-tests-configmap-pkdgh deletion completed in 6.087130855s • [SLOW TEST:10.243 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:11:07.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c8009fcb-91e5-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 11:11:07.743: INFO: Waiting up to 5m0s for pod "pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-cm7l7" to be "success or failure" May 9 11:11:07.747: INFO: Pod "pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.567615ms May 9 11:11:09.751: INFO: Pod "pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007749788s May 9 11:11:11.755: INFO: Pod "pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011745041s STEP: Saw pod success May 9 11:11:11.755: INFO: Pod "pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:11:11.758: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 11:11:11.777: INFO: Waiting for pod pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:11:11.781: INFO: Pod pod-secrets-c8025d61-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:11:11.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-cm7l7" for this suite. 
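The defaultMode knob exercised above controls the file mode of every key projected from the Secret into the volume; for example (names invented, mode given in octal):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-mode-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0400        # every projected file gets mode 0400
EOF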
May 9 11:11:17.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:11:17.813: INFO: namespace: e2e-tests-secrets-cm7l7, resource: bindings, ignored listing per whitelist May 9 11:11:17.880: INFO: namespace e2e-tests-secrets-cm7l7 deletion completed in 6.096120466s • [SLOW TEST:10.246 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:11:17.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-txmbb.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-txmbb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-txmbb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-txmbb.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-txmbb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-txmbb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 11:11:24.108: INFO: DNS probes using e2e-tests-dns-txmbb/dns-test-ce1f634c-91e5-11ea-a20c-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:11:24.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-txmbb" for this suite. 
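The dig loops above boil down to "can a pod resolve kubernetes.default and its own A record". A quicker manual check from inside the cluster, assuming busybox:1.28 (whose nslookup behaves reliably) can be pulled:

kubectl run dns-check --restart=Never --image=busybox:1.28 -- \
  sh -c 'nslookup kubernetes.default && nslookup kubernetes.default.svc.cluster.local'
kubectl logs dns-check      # after the pod completes; both lookups should return the service cluster IP
kubectl delete pod dns-check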
May 9 11:11:30.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:11:30.203: INFO: namespace: e2e-tests-dns-txmbb, resource: bindings, ignored listing per whitelist May 9 11:11:30.265: INFO: namespace e2e-tests-dns-txmbb deletion completed in 6.086743157s • [SLOW TEST:12.385 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:11:30.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 9 11:11:30.402: INFO: Waiting up to 5m0s for pod "var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-var-expansion-l46qn" to be "success or failure" May 9 11:11:30.438: INFO: Pod "var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 36.697419ms May 9 11:11:32.442: INFO: Pod "var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040762575s May 9 11:11:34.446: INFO: Pod "var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044689664s STEP: Saw pod success May 9 11:11:34.446: INFO: Pod "var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:11:34.449: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 11:11:34.563: INFO: Waiting for pod var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:11:34.572: INFO: Pod var-expansion-d57fce05-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:11:34.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-l46qn" for this suite. 
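Env composition as tested above relies on Kubernetes' $(VAR) expansion inside env values and command arguments; a minimal illustration with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(COMPOSED_VAR)"]
    env:
    - name: FIRST_VAR
      value: "first"
    - name: COMPOSED_VAR
      value: "prefix-$(FIRST_VAR)-suffix"   # expanded by Kubernetes, not by the shell
EOF
kubectl logs var-expansion-demo    # prints prefix-first-suffix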
May 9 11:11:42.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:11:42.654: INFO: namespace: e2e-tests-var-expansion-l46qn, resource: bindings, ignored listing per whitelist May 9 11:11:42.690: INFO: namespace e2e-tests-var-expansion-l46qn deletion completed in 8.113936476s • [SLOW TEST:12.425 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:11:42.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:11:42.850: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:11:43.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-jq464" for this suite. 
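Creating and deleting a CustomResourceDefinition, which this spec does through the Go client, looks like the following with kubectl against a v1.13-era API server (apiextensions.k8s.io/v1beta1; later clusters use .../v1 with a slightly different schema). Group and names are invented:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com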
May 9 11:11:49.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:11:50.048: INFO: namespace: e2e-tests-custom-resource-definition-jq464, resource: bindings, ignored listing per whitelist May 9 11:11:50.051: INFO: namespace e2e-tests-custom-resource-definition-jq464 deletion completed in 6.111619285s • [SLOW TEST:7.360 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:11:50.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-e144f55a-91e5-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:11:50.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-hh45s" to be "success or failure" May 9 11:11:50.170: INFO: Pod "pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.532703ms May 9 11:11:52.174: INFO: Pod "pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007533843s May 9 11:11:54.178: INFO: Pod "pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.011778951s May 9 11:11:56.182: INFO: Pod "pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015654405s STEP: Saw pod success May 9 11:11:56.182: INFO: Pod "pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:11:56.185: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 9 11:11:56.211: INFO: Waiting for pod pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018 to disappear May 9 11:11:56.227: INFO: Pod pod-configmaps-e1480686-91e5-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:11:56.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hh45s" for this suite. 
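This ConfigMap-volume spec is the ConfigMap counterpart of the Secret defaultMode sketch earlier; a compact reproduction with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-mode-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-mode-demo
      defaultMode: 0400      # mode applied to every projected key file
EOF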
May 9 11:12:02.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:12:02.290: INFO: namespace: e2e-tests-configmap-hh45s, resource: bindings, ignored listing per whitelist May 9 11:12:02.337: INFO: namespace e2e-tests-configmap-hh45s deletion completed in 6.107069476s • [SLOW TEST:12.287 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:12:02.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 9 11:12:02.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:02.973: INFO: stderr: "" May 9 11:12:02.973: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 9 11:12:02.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:03.160: INFO: stderr: "" May 9 11:12:03.160: INFO: stdout: "update-demo-nautilus-wmcbl update-demo-nautilus-zfpg8 " May 9 11:12:03.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wmcbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:03.337: INFO: stderr: "" May 9 11:12:03.337: INFO: stdout: "" May 9 11:12:03.337: INFO: update-demo-nautilus-wmcbl is created but not running May 9 11:12:08.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:08.540: INFO: stderr: "" May 9 11:12:08.540: INFO: stdout: "update-demo-nautilus-wmcbl update-demo-nautilus-zfpg8 " May 9 11:12:08.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wmcbl -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:08.687: INFO: stderr: "" May 9 11:12:08.688: INFO: stdout: "true" May 9 11:12:08.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wmcbl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:08.795: INFO: stderr: "" May 9 11:12:08.795: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:12:08.795: INFO: validating pod update-demo-nautilus-wmcbl May 9 11:12:08.799: INFO: got data: { "image": "nautilus.jpg" } May 9 11:12:08.799: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 11:12:08.799: INFO: update-demo-nautilus-wmcbl is verified up and running May 9 11:12:08.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfpg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:08.892: INFO: stderr: "" May 9 11:12:08.892: INFO: stdout: "true" May 9 11:12:08.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfpg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:09.004: INFO: stderr: "" May 9 11:12:09.005: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:12:09.005: INFO: validating pod update-demo-nautilus-zfpg8 May 9 11:12:09.009: INFO: got data: { "image": "nautilus.jpg" } May 9 11:12:09.009: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 11:12:09.009: INFO: update-demo-nautilus-zfpg8 is verified up and running STEP: scaling down the replication controller May 9 11:12:09.011: INFO: scanned /root for discovery docs: May 9 11:12:09.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:10.141: INFO: stderr: "" May 9 11:12:10.141: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 9 11:12:10.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:10.254: INFO: stderr: "" May 9 11:12:10.255: INFO: stdout: "update-demo-nautilus-wmcbl update-demo-nautilus-zfpg8 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 9 11:12:15.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:15.381: INFO: stderr: "" May 9 11:12:15.382: INFO: stdout: "update-demo-nautilus-wmcbl update-demo-nautilus-zfpg8 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 9 11:12:20.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:20.500: INFO: stderr: "" May 9 11:12:20.500: INFO: stdout: "update-demo-nautilus-wmcbl update-demo-nautilus-zfpg8 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 9 11:12:25.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:25.600: INFO: stderr: "" May 9 11:12:25.600: INFO: stdout: "update-demo-nautilus-zfpg8 " May 9 11:12:25.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfpg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:25.736: INFO: stderr: "" May 9 11:12:25.736: INFO: stdout: "true" May 9 11:12:25.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfpg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:25.847: INFO: stderr: "" May 9 11:12:25.847: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:12:25.847: INFO: validating pod update-demo-nautilus-zfpg8 May 9 11:12:25.851: INFO: got data: { "image": "nautilus.jpg" } May 9 11:12:25.851: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 11:12:25.851: INFO: update-demo-nautilus-zfpg8 is verified up and running STEP: scaling up the replication controller May 9 11:12:25.854: INFO: scanned /root for discovery docs: May 9 11:12:25.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:26.980: INFO: stderr: "" May 9 11:12:26.980: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
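(Aside: the wait loop above and below is plain kubectl with go-template output: list the pods behind the name=update-demo label, then test whether the update-demo container in each pod has reached the running state. A minimal standalone sketch of the same checks, with the namespace and pod name as placeholders rather than the generated values used by the suite:

  # scale the replication controller, as the suite does
  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n <namespace>

  # list the pods selected by the RC's label
  kubectl get pods -l name=update-demo -n <namespace> \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

  # prints "true" only if the update-demo container in that pod is running
  kubectl get pods <pod-name> -n <namespace> -o template \
    --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

The template functions used here, exists, and, eq, are the ones exercised by the log entries in this block, so this assumes the same kubectl version as the run above.)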
May 9 11:12:26.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:27.085: INFO: stderr: "" May 9 11:12:27.085: INFO: stdout: "update-demo-nautilus-wnq2s update-demo-nautilus-zfpg8 " May 9 11:12:27.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wnq2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:27.190: INFO: stderr: "" May 9 11:12:27.190: INFO: stdout: "" May 9 11:12:27.190: INFO: update-demo-nautilus-wnq2s is created but not running May 9 11:12:32.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:32.306: INFO: stderr: "" May 9 11:12:32.306: INFO: stdout: "update-demo-nautilus-wnq2s update-demo-nautilus-zfpg8 " May 9 11:12:32.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wnq2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:32.406: INFO: stderr: "" May 9 11:12:32.406: INFO: stdout: "true" May 9 11:12:32.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wnq2s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:32.494: INFO: stderr: "" May 9 11:12:32.494: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:12:32.494: INFO: validating pod update-demo-nautilus-wnq2s May 9 11:12:32.498: INFO: got data: { "image": "nautilus.jpg" } May 9 11:12:32.498: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 11:12:32.498: INFO: update-demo-nautilus-wnq2s is verified up and running May 9 11:12:32.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfpg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:32.591: INFO: stderr: "" May 9 11:12:32.591: INFO: stdout: "true" May 9 11:12:32.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfpg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:32.689: INFO: stderr: "" May 9 11:12:32.689: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:12:32.689: INFO: validating pod update-demo-nautilus-zfpg8 May 9 11:12:32.692: INFO: got data: { "image": "nautilus.jpg" } May 9 11:12:32.692: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 9 11:12:32.692: INFO: update-demo-nautilus-zfpg8 is verified up and running STEP: using delete to clean up resources May 9 11:12:32.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:32.801: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 11:12:32.801: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 9 11:12:32.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-vj6vp' May 9 11:12:33.002: INFO: stderr: "No resources found.\n" May 9 11:12:33.002: INFO: stdout: "" May 9 11:12:33.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-vj6vp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 9 11:12:33.205: INFO: stderr: "" May 9 11:12:33.205: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:12:33.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vj6vp" for this suite. May 9 11:12:57.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:12:57.252: INFO: namespace: e2e-tests-kubectl-vj6vp, resource: bindings, ignored listing per whitelist May 9 11:12:57.308: INFO: namespace e2e-tests-kubectl-vj6vp deletion completed in 24.098171311s • [SLOW TEST:54.970 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:12:57.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:12:57.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-2twbx" to be "success or failure" May 9 
11:12:57.504: INFO: Pod "downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.677201ms May 9 11:12:59.517: INFO: Pod "downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01847816s May 9 11:13:01.550: INFO: Pod "downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052036884s May 9 11:13:03.554: INFO: Pod "downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055942904s STEP: Saw pod success May 9 11:13:03.554: INFO: Pod "downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:13:03.557: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:13:03.611: INFO: Waiting for pod downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018 to disappear May 9 11:13:03.640: INFO: Pod downwardapi-volume-096cf594-91e6-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:13:03.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2twbx" for this suite. May 9 11:13:09.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:13:09.703: INFO: namespace: e2e-tests-projected-2twbx, resource: bindings, ignored listing per whitelist May 9 11:13:09.726: INFO: namespace e2e-tests-projected-2twbx deletion completed in 6.083758262s • [SLOW TEST:12.419 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:13:09.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 9 11:13:10.531: INFO: Pod name wrapped-volume-race-1129ae60-91e6-11ea-a20c-0242ac110018: Found 0 pods out of 5 May 9 11:13:15.537: INFO: Pod name wrapped-volume-race-1129ae60-91e6-11ea-a20c-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1129ae60-91e6-11ea-a20c-0242ac110018 in namespace e2e-tests-emptydir-wrapper-swpgd, will wait for the garbage 
collector to delete the pods May 9 11:15:07.617: INFO: Deleting ReplicationController wrapped-volume-race-1129ae60-91e6-11ea-a20c-0242ac110018 took: 7.001518ms May 9 11:15:07.717: INFO: Terminating ReplicationController wrapped-volume-race-1129ae60-91e6-11ea-a20c-0242ac110018 pods took: 100.235064ms STEP: Creating RC which spawns configmap-volume pods May 9 11:15:51.587: INFO: Pod name wrapped-volume-race-71281c24-91e6-11ea-a20c-0242ac110018: Found 0 pods out of 5 May 9 11:15:56.594: INFO: Pod name wrapped-volume-race-71281c24-91e6-11ea-a20c-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-71281c24-91e6-11ea-a20c-0242ac110018 in namespace e2e-tests-emptydir-wrapper-swpgd, will wait for the garbage collector to delete the pods May 9 11:17:48.690: INFO: Deleting ReplicationController wrapped-volume-race-71281c24-91e6-11ea-a20c-0242ac110018 took: 7.440967ms May 9 11:17:48.890: INFO: Terminating ReplicationController wrapped-volume-race-71281c24-91e6-11ea-a20c-0242ac110018 pods took: 200.346047ms STEP: Creating RC which spawns configmap-volume pods May 9 11:18:31.523: INFO: Pod name wrapped-volume-race-d0820cbe-91e6-11ea-a20c-0242ac110018: Found 0 pods out of 5 May 9 11:18:36.532: INFO: Pod name wrapped-volume-race-d0820cbe-91e6-11ea-a20c-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d0820cbe-91e6-11ea-a20c-0242ac110018 in namespace e2e-tests-emptydir-wrapper-swpgd, will wait for the garbage collector to delete the pods May 9 11:21:10.613: INFO: Deleting ReplicationController wrapped-volume-race-d0820cbe-91e6-11ea-a20c-0242ac110018 took: 7.552308ms May 9 11:21:10.814: INFO: Terminating ReplicationController wrapped-volume-race-d0820cbe-91e6-11ea-a20c-0242ac110018 pods took: 200.28911ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:21:53.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-swpgd" for this suite. 
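(Aside: the race test above creates 50 ConfigMaps and a replication controller whose five pods mount them as volumes, then deletes the controller and waits for the garbage collector to remove the pods, repeating the cycle three times. A rough sketch of the setup half with kubectl, using hypothetical names and a placeholder namespace:

  # create the 50 small ConfigMaps that the configmap-volume pods will mount
  for i in $(seq 0 49); do
    kubectl create configmap wrapped-volume-cm-$i \
      --from-literal=data-1=value-1 -n <namespace>
  done

  # after the pods are Running, delete the spawning RC; its pods are cleaned up afterwards
  kubectl delete rc <rc-name> -n <namespace>)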
May 9 11:22:01.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:22:01.226: INFO: namespace: e2e-tests-emptydir-wrapper-swpgd, resource: bindings, ignored listing per whitelist May 9 11:22:01.248: INFO: namespace e2e-tests-emptydir-wrapper-swpgd deletion completed in 8.110386235s • [SLOW TEST:531.521 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:22:01.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-4d98948f-91e7-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:22:01.372: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-grnwf" to be "success or failure" May 9 11:22:01.395: INFO: Pod "pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.316748ms May 9 11:22:03.419: INFO: Pod "pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047115265s May 9 11:22:05.455: INFO: Pod "pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082669932s STEP: Saw pod success May 9 11:22:05.455: INFO: Pod "pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:22:05.458: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 9 11:22:05.497: INFO: Waiting for pod pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:22:05.514: INFO: Pod pod-projected-configmaps-4d9a3d79-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:22:05.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-grnwf" for this suite. 
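(Aside: the "with mappings" variant above mounts a ConfigMap key under a remapped path inside a projected volume. A minimal sketch of an equivalent pod, with hypothetical names, a busybox image standing in for the suite's test image, and a placeholder namespace:

  kubectl create configmap projected-configmap-test-volume-map \
    --from-literal=data-2=value-2 -n <namespace>

  kubectl create -n <namespace> -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      # print the remapped key so its content can be checked from the pod log
      command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-map
            items:
            - key: data-2
              path: path/to/data-2
  EOF)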
May 9 11:22:11.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:22:11.595: INFO: namespace: e2e-tests-projected-grnwf, resource: bindings, ignored listing per whitelist May 9 11:22:11.605: INFO: namespace e2e-tests-projected-grnwf deletion completed in 6.087318404s • [SLOW TEST:10.358 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:22:11.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-53c5ad28-91e7-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:22:11.730: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-dwf66" to be "success or failure" May 9 11:22:11.747: INFO: Pod "pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.521027ms May 9 11:22:13.790: INFO: Pod "pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060512269s May 9 11:22:15.846: INFO: Pod "pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116109238s STEP: Saw pod success May 9 11:22:15.846: INFO: Pod "pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:22:15.849: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 9 11:22:15.929: INFO: Waiting for pod pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:22:15.933: INFO: Pod pod-projected-configmaps-53c6248b-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:22:15.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dwf66" for this suite. 
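(Aside: nearly all of the volume tests in this run follow the same "success or failure" pattern visible above: create a short-lived pod, wait for its phase to reach Succeeded, read the test container's log, then delete the pod. A rough kubectl equivalent of that loop, with names and namespace as placeholders:

  # poll the phase until it reports Succeeded (or Failed)
  kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.phase}'

  # on success, inspect the test container's output, then clean up
  kubectl logs <pod-name> -c <container-name> -n <namespace>
  kubectl delete pod <pod-name> -n <namespace>)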
May 9 11:22:21.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:22:22.008: INFO: namespace: e2e-tests-projected-dwf66, resource: bindings, ignored listing per whitelist May 9 11:22:22.049: INFO: namespace e2e-tests-projected-dwf66 deletion completed in 6.112282807s • [SLOW TEST:10.444 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:22:22.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jgmc2 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-jgmc2 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-jgmc2 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-jgmc2 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-jgmc2 May 9 11:22:26.228: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-jgmc2, name: ss-0, uid: 5a3b3041-91e7-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 9 11:22:31.246: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-jgmc2, name: ss-0, uid: 5a3b3041-91e7-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 9 11:22:31.259: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-jgmc2, name: ss-0, uid: 5a3b3041-91e7-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
May 9 11:22:31.287: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-jgmc2 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-jgmc2 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-jgmc2 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 9 11:22:45.405: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jgmc2 May 9 11:22:45.407: INFO: Scaling statefulset ss to 0 May 9 11:22:55.423: INFO: Waiting for statefulset status.replicas updated to 0 May 9 11:22:55.427: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:22:55.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jgmc2" for this suite. May 9 11:23:01.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:23:01.488: INFO: namespace: e2e-tests-statefulset-jgmc2, resource: bindings, ignored listing per whitelist May 9 11:23:01.538: INFO: namespace e2e-tests-statefulset-jgmc2 deletion completed in 6.09604644s • [SLOW TEST:39.489 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:23:01.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:23:01.757: INFO: Waiting up to 5m0s for pod "downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-ssgqd" to be "success or failure" May 9 11:23:01.759: INFO: Pod "downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502341ms May 9 11:23:03.763: INFO: Pod "downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005946331s May 9 11:23:05.767: INFO: Pod "downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010264216s STEP: Saw pod success May 9 11:23:05.767: INFO: Pod "downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:23:05.770: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:23:05.805: INFO: Waiting for pod downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:23:05.833: INFO: Pod downwardapi-volume-718e212f-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:23:05.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ssgqd" for this suite. May 9 11:23:11.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:23:11.867: INFO: namespace: e2e-tests-downward-api-ssgqd, resource: bindings, ignored listing per whitelist May 9 11:23:11.940: INFO: namespace e2e-tests-downward-api-ssgqd deletion completed in 6.10300265s • [SLOW TEST:10.401 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:23:11.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 9 11:23:12.087: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-h4kw4,SelfLink:/api/v1/namespaces/e2e-tests-watch-h4kw4/configmaps/e2e-watch-test-watch-closed,UID:77be2d36-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579748,Generation:0,CreationTimestamp:2020-05-09 11:23:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 11:23:12.088: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-h4kw4,SelfLink:/api/v1/namespaces/e2e-tests-watch-h4kw4/configmaps/e2e-watch-test-watch-closed,UID:77be2d36-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579749,Generation:0,CreationTimestamp:2020-05-09 11:23:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 9 11:23:12.135: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-h4kw4,SelfLink:/api/v1/namespaces/e2e-tests-watch-h4kw4/configmaps/e2e-watch-test-watch-closed,UID:77be2d36-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579750,Generation:0,CreationTimestamp:2020-05-09 11:23:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 11:23:12.135: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-h4kw4,SelfLink:/api/v1/namespaces/e2e-tests-watch-h4kw4/configmaps/e2e-watch-test-watch-closed,UID:77be2d36-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579751,Generation:0,CreationTimestamp:2020-05-09 11:23:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:23:12.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-h4kw4" for this suite. 
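(Aside: the watch test above deliberately closes its first watch, mutates and deletes the ConfigMap, then opens a second watch starting from the last resourceVersion it observed, and still receives the missed MODIFIED and DELETED notifications. A rough way to reproduce that against the raw API through kubectl proxy, with the namespace, port, and resourceVersion as placeholders:

  kubectl proxy --port=8001 &

  # resume watching from a previously observed resourceVersion;
  # events that happened after it are replayed on the new watch
  curl "http://127.0.0.1:8001/api/v1/namespaces/<namespace>/configmaps?watch=true&resourceVersion=<last-seen-rv>&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted")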
May 9 11:23:18.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:23:18.271: INFO: namespace: e2e-tests-watch-h4kw4, resource: bindings, ignored listing per whitelist May 9 11:23:18.290: INFO: namespace e2e-tests-watch-h4kw4 deletion completed in 6.107636244s • [SLOW TEST:6.350 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:23:18.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:23:18.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-h9hb7" to be "success or failure" May 9 11:23:18.431: INFO: Pod "downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.576666ms May 9 11:23:20.435: INFO: Pod "downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01846957s May 9 11:23:22.439: INFO: Pod "downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023158195s STEP: Saw pod success May 9 11:23:22.439: INFO: Pod "downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:23:22.442: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:23:22.505: INFO: Waiting for pod downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:23:22.565: INFO: Pod downwardapi-volume-7b866b3f-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:23:22.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-h9hb7" for this suite. 
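(Aside: the downward API volume used above exposes a container's memory request as a file. A minimal sketch of such a pod, with hypothetical names, busybox standing in for the test image, and a placeholder namespace:

  kubectl create -n <namespace> -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/podinfo/memory_request"]
      resources:
        requests:
          memory: 32Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
  EOF)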
May 9 11:23:28.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:23:28.642: INFO: namespace: e2e-tests-downward-api-h9hb7, resource: bindings, ignored listing per whitelist May 9 11:23:28.657: INFO: namespace e2e-tests-downward-api-h9hb7 deletion completed in 6.087947743s • [SLOW TEST:10.367 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:23:28.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 9 11:23:28.776: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:23:28.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wbh2w" for this suite. 
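(Aside: the proxy test above only checks that kubectl proxy accepts --port 0, which tells it to pick a free port, and that /api/ is reachable through it. Done by hand that looks roughly like:

  # port 0 asks the proxy to choose a free port; it prints the address it serves on
  kubectl proxy --port=0 --disable-filter &

  # query the proxied API root using the port printed by the previous command
  curl http://127.0.0.1:<printed-port>/api/)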
May 9 11:23:34.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:23:34.914: INFO: namespace: e2e-tests-kubectl-wbh2w, resource: bindings, ignored listing per whitelist May 9 11:23:34.933: INFO: namespace e2e-tests-kubectl-wbh2w deletion completed in 6.067415942s • [SLOW TEST:6.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:23:34.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 9 11:23:35.039: INFO: Waiting up to 5m0s for pod "client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-containers-cvhrg" to be "success or failure" May 9 11:23:35.098: INFO: Pod "client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 59.259831ms May 9 11:23:37.103: INFO: Pod "client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063894361s May 9 11:23:39.107: INFO: Pod "client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068111521s STEP: Saw pod success May 9 11:23:39.107: INFO: Pod "client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:23:39.110: INFO: Trying to get logs from node hunter-worker pod client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 11:23:39.164: INFO: Waiting for pod client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:23:39.168: INFO: Pod client-containers-856e1b8d-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:23:39.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-cvhrg" for this suite. 
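(Aside: the Docker Containers test above verifies that a pod's command field overrides the image's default ENTRYPOINT. A minimal sketch; the busybox image and the echoed text are assumptions, not the suite's actual image or arguments:

  kubectl create -n <namespace> -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      # command overrides the image ENTRYPOINT; args would override its CMD
      command: ["/bin/echo", "entrypoint overridden"]
  EOF)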
May 9 11:23:45.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:23:45.283: INFO: namespace: e2e-tests-containers-cvhrg, resource: bindings, ignored listing per whitelist May 9 11:23:45.304: INFO: namespace e2e-tests-containers-cvhrg deletion completed in 6.13137246s • [SLOW TEST:10.371 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:23:45.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:23:45.396: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 9 11:23:45.427: INFO: Pod name sample-pod: Found 0 pods out of 1 May 9 11:23:50.431: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 9 11:23:50.431: INFO: Creating deployment "test-rolling-update-deployment" May 9 11:23:50.436: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 9 11:23:50.448: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 9 11:23:52.462: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 9 11:23:52.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724620230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724620230, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724620230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724620230, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 11:23:54.469: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 9 11:23:54.478: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-kpghr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kpghr/deployments/test-rolling-update-deployment,UID:8e9cc626-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579939,Generation:1,CreationTimestamp:2020-05-09 11:23:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-09 11:23:50 +0000 UTC 2020-05-09 11:23:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-09 11:23:53 +0000 UTC 2020-05-09 11:23:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 9 11:23:54.482: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-kpghr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kpghr/replicasets/test-rolling-update-deployment-75db98fb4c,UID:8ea08dbe-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579930,Generation:1,CreationTimestamp:2020-05-09 11:23:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8e9cc626-91e7-11ea-99e8-0242ac110002 0xc0023b72c7 0xc0023b72c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 9 11:23:54.482: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 9 11:23:54.482: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-kpghr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kpghr/replicasets/test-rolling-update-controller,UID:8b9c6cc8-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579938,Generation:2,CreationTimestamp:2020-05-09 11:23:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8e9cc626-91e7-11ea-99e8-0242ac110002 0xc0023b7207 0xc0023b7208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 9 11:23:54.485: INFO: Pod "test-rolling-update-deployment-75db98fb4c-p6zsb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-p6zsb,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-kpghr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kpghr/pods/test-rolling-update-deployment-75db98fb4c-p6zsb,UID:8ea121bc-91e7-11ea-99e8-0242ac110002,ResourceVersion:9579929,Generation:0,CreationTimestamp:2020-05-09 11:23:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 8ea08dbe-91e7-11ea-99e8-0242ac110002 0xc0023b7be7 0xc0023b7be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gctg8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gctg8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gctg8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023b7c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023b7c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:23:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:23:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:23:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:23:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.197,StartTime:2020-05-09 11:23:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-09 11:23:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://bbcc8e454a779cc72be7907bc968ca315a8f154e61d197bd615ee5f738af54e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:23:54.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-kpghr" for this suite. 
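(Aside: in the deployment dump above, the strategy prints as "MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)"; that appears to be the literal % values tripping over the log's printf formatting, and it stands for RollingUpdate settings of 25% maxUnavailable and 25% maxSurge. A minimal deployment carrying the same strategy, with a hypothetical name and a placeholder namespace:

  kubectl apply -n <namespace> -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-rolling-update-demo
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: sample-pod
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: "25%"
        maxSurge: "25%"
    template:
      metadata:
        labels:
          name: sample-pod
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF

  # watch the new replica set scale up while the old one scales down
  kubectl rollout status deployment/test-rolling-update-demo -n <namespace>)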
May 9 11:24:00.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:24:00.591: INFO: namespace: e2e-tests-deployment-kpghr, resource: bindings, ignored listing per whitelist May 9 11:24:00.635: INFO: namespace e2e-tests-deployment-kpghr deletion completed in 6.146499513s • [SLOW TEST:15.331 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:24:00.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-94c1e988-91e7-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 11:24:00.753: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-bzs49" to be "success or failure" May 9 11:24:00.757: INFO: Pod "pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542248ms May 9 11:24:02.761: INFO: Pod "pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008218646s May 9 11:24:04.766: INFO: Pod "pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013181877s STEP: Saw pod success May 9 11:24:04.766: INFO: Pod "pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:24:04.770: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 11:24:04.796: INFO: Waiting for pod pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:24:04.800: INFO: Pod pod-projected-secrets-94c272e7-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:24:04.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bzs49" for this suite. 
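The projected-secret case above mounts a single secret through two projected volumes in one pod; a minimal sketch of that shape, with illustrative secret and pod names rather than the generated ones from this run:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        # Read the same key from both mount points.
        command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
        volumeMounts:
        - name: vol-1
          mountPath: /etc/projected-volume-1
          readOnly: true
        - name: vol-2
          mountPath: /etc/projected-volume-2
          readOnly: true
      volumes:
      - name: vol-1
        projected:
          sources:
          - secret:
              name: demo-secret
      - name: vol-2
        projected:
          sources:
          - secret:
              name: demo-secret
    EOF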
May 9 11:24:10.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:24:10.887: INFO: namespace: e2e-tests-projected-bzs49, resource: bindings, ignored listing per whitelist May 9 11:24:10.978: INFO: namespace e2e-tests-projected-bzs49 deletion completed in 6.136150222s • [SLOW TEST:10.342 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:24:10.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 9 11:24:11.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-qc7p7' May 9 11:24:13.338: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 11:24:13.338: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 9 11:24:17.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-qc7p7' May 9 11:24:17.473: INFO: stderr: "" May 9 11:24:17.473: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:24:17.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qc7p7" for this suite. 
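The stderr captured above flags "kubectl run --generator=deployment/v1beta1" as deprecated; a sketch of the non-deprecated way to get an equivalent Deployment (same name and image as the test, --namespace flag omitted for brevity):

    # Replacement for the deprecated "kubectl run --generator=deployment/v1beta1" form.
    kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    # Verify the Deployment and the pod it controls, then clean up as the test does.
    kubectl get deployment e2e-test-nginx-deployment
    kubectl get pods -l app=e2e-test-nginx-deployment
    kubectl delete deployment e2e-test-nginx-deployment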
May 9 11:24:23.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:24:23.559: INFO: namespace: e2e-tests-kubectl-qc7p7, resource: bindings, ignored listing per whitelist May 9 11:24:23.588: INFO: namespace e2e-tests-kubectl-qc7p7 deletion completed in 6.112859982s • [SLOW TEST:12.610 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:24:23.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0509 11:24:54.235291 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 9 11:24:54.235: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:24:54.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5mxww" for this suite. 
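The garbage-collector case above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then checks that its ReplicaSet survives; a sketch of the same behaviour from the command line, assuming kubectl 1.20+ (older clients spell the flag --cascade=false) and an illustrative deployment name:

    kubectl create deployment gc-orphan-demo --image=docker.io/library/nginx:1.14-alpine
    # Delete only the Deployment; orphan propagation leaves dependents in place.
    kubectl delete deployment gc-orphan-demo --cascade=orphan
    # The ReplicaSet and its pods should still exist, now without an owner.
    kubectl get replicasets,pods -l app=gc-orphan-demo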
May 9 11:25:00.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:25:00.371: INFO: namespace: e2e-tests-gc-5mxww, resource: bindings, ignored listing per whitelist May 9 11:25:00.397: INFO: namespace e2e-tests-gc-5mxww deletion completed in 6.158677043s • [SLOW TEST:36.809 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:25:00.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 9 11:25:00.748: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-4wkgh" to be "success or failure" May 9 11:25:00.759: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207062ms May 9 11:25:02.763: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014102082s May 9 11:25:04.770: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021139772s May 9 11:25:06.774: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025523647s STEP: Saw pod success May 9 11:25:06.774: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 9 11:25:06.778: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 9 11:25:06.815: INFO: Waiting for pod pod-host-path-test to disappear May 9 11:25:06.818: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:25:06.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-4wkgh" for this suite. 
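The hostPath case above mounts a directory from the node and checks the mode it receives inside the container; a minimal sketch of that pattern, assuming a throwaway cluster where creating /tmp/hostpath-mode-demo on the node is acceptable (the path and all names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox
        # Print the mode of the mounted directory, roughly what the test inspects.
        command: ["sh", "-c", "ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp/hostpath-mode-demo
          type: DirectoryOrCreate
    EOF
    # Once the pod has completed:
    kubectl logs hostpath-mode-demo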
May 9 11:25:12.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:25:12.878: INFO: namespace: e2e-tests-hostpath-4wkgh, resource: bindings, ignored listing per whitelist May 9 11:25:12.912: INFO: namespace e2e-tests-hostpath-4wkgh deletion completed in 6.091203524s • [SLOW TEST:12.514 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:25:12.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-bfd35a97-91e7-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 11:25:13.013: INFO: Waiting up to 5m0s for pod "pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-pbcl6" to be "success or failure" May 9 11:25:13.028: INFO: Pod "pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.815106ms May 9 11:25:15.035: INFO: Pod "pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022110014s May 9 11:25:17.039: INFO: Pod "pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026904275s STEP: Saw pod success May 9 11:25:17.040: INFO: Pod "pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:25:17.043: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 11:25:17.088: INFO: Waiting for pod pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:25:17.090: INFO: Pod pod-secrets-bfd3ed04-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:25:17.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-pbcl6" for this suite. 
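The secrets case above maps a secret key to a custom path with an explicit per-item file mode; a sketch of that volume shape, using illustrative names and an example item mode of 0400:

    kubectl create secret generic secret-map-demo --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        # Show the remapped path and its mode, then the content.
        command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-map-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400   # item-level mode overrides defaultMode for this key
    EOF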
May 9 11:25:23.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:25:23.168: INFO: namespace: e2e-tests-secrets-pbcl6, resource: bindings, ignored listing per whitelist May 9 11:25:23.197: INFO: namespace e2e-tests-secrets-pbcl6 deletion completed in 6.103247739s • [SLOW TEST:10.285 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:25:23.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w5p8g;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w5p8g;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w5p8g.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.145.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.145.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.145.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.145.220_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w5p8g;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w5p8g.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w5p8g.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w5p8g.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.145.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.145.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.145.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.145.220_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 9 11:25:29.561: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.563: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.577: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.598: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.601: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.603: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.605: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.608: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.610: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.614: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.616: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:29.632: INFO: Lookups using e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w5p8g 
jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc] May 9 11:25:34.638: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.642: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.658: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.684: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.687: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.690: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.692: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.695: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.698: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.701: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.704: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:34.723: INFO: Lookups using e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.e2e-tests-dns-w5p8g jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc] May 9 11:25:39.642: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.647: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.660: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.675: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.677: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.679: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.681: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.683: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.686: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.688: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.691: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:39.707: INFO: Lookups using e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w5p8g jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc] May 9 11:25:44.638: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.642: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.657: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.679: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.682: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.684: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.686: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.688: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.690: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.693: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.695: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:44.710: INFO: Lookups using e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w5p8g jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc] May 9 11:25:49.638: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.641: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.657: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.678: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.680: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.683: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.685: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.688: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.690: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.693: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.696: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:49.711: INFO: Lookups using e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w5p8g jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc] May 9 11:25:54.637: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.640: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.655: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.682: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.685: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.688: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.691: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.694: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.697: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.699: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.702: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc from pod e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018: the server could not find the requested resource (get pods dns-test-c6091097-91e7-11ea-a20c-0242ac110018) May 9 11:25:54.720: INFO: Lookups using e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w5p8g jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g jessie_udp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@dns-test-service.e2e-tests-dns-w5p8g.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w5p8g.svc] May 9 11:25:59.711: INFO: DNS probes using e2e-tests-dns-w5p8g/dns-test-c6091097-91e7-11ea-a20c-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:26:00.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-w5p8g" for this suite. May 9 11:26:06.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:26:06.716: INFO: namespace: e2e-tests-dns-w5p8g, resource: bindings, ignored listing per whitelist May 9 11:26:06.769: INFO: namespace e2e-tests-dns-w5p8g deletion completed in 6.28412749s • [SLOW TEST:43.572 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:26:06.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:26:06.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-5b4v2" to be "success or failure" May 9 11:26:06.929: INFO: Pod "downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.821899ms May 9 11:26:08.969: INFO: Pod "downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062559444s May 9 11:26:10.974: INFO: Pod "downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067233736s STEP: Saw pod success May 9 11:26:10.974: INFO: Pod "downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:26:10.977: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:26:11.032: INFO: Waiting for pod downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018 to disappear May 9 11:26:11.054: INFO: Pod downwardapi-volume-dff3f9c6-91e7-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:26:11.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5b4v2" for this suite. May 9 11:26:17.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:26:17.174: INFO: namespace: e2e-tests-projected-5b4v2, resource: bindings, ignored listing per whitelist May 9 11:26:17.182: INFO: namespace e2e-tests-projected-5b4v2 deletion completed in 6.115461832s • [SLOW TEST:10.412 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:26:17.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-cq426 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-cq426 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-cq426 May 9 11:26:17.309: INFO: Found 0 stateful pods, waiting for 1 May 9 11:26:27.315: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 9 11:26:27.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 11:26:27.575: INFO: stderr: "I0509 11:26:27.451798 1658 log.go:172] 
(0xc000162840) (0xc000687360) Create stream\nI0509 11:26:27.451851 1658 log.go:172] (0xc000162840) (0xc000687360) Stream added, broadcasting: 1\nI0509 11:26:27.454429 1658 log.go:172] (0xc000162840) Reply frame received for 1\nI0509 11:26:27.454498 1658 log.go:172] (0xc000162840) (0xc00070e000) Create stream\nI0509 11:26:27.454520 1658 log.go:172] (0xc000162840) (0xc00070e000) Stream added, broadcasting: 3\nI0509 11:26:27.455283 1658 log.go:172] (0xc000162840) Reply frame received for 3\nI0509 11:26:27.455308 1658 log.go:172] (0xc000162840) (0xc000687400) Create stream\nI0509 11:26:27.455316 1658 log.go:172] (0xc000162840) (0xc000687400) Stream added, broadcasting: 5\nI0509 11:26:27.456161 1658 log.go:172] (0xc000162840) Reply frame received for 5\nI0509 11:26:27.569364 1658 log.go:172] (0xc000162840) Data frame received for 3\nI0509 11:26:27.569411 1658 log.go:172] (0xc00070e000) (3) Data frame handling\nI0509 11:26:27.569443 1658 log.go:172] (0xc00070e000) (3) Data frame sent\nI0509 11:26:27.569559 1658 log.go:172] (0xc000162840) Data frame received for 3\nI0509 11:26:27.569578 1658 log.go:172] (0xc00070e000) (3) Data frame handling\nI0509 11:26:27.569826 1658 log.go:172] (0xc000162840) Data frame received for 5\nI0509 11:26:27.569852 1658 log.go:172] (0xc000687400) (5) Data frame handling\nI0509 11:26:27.571606 1658 log.go:172] (0xc000162840) Data frame received for 1\nI0509 11:26:27.571616 1658 log.go:172] (0xc000687360) (1) Data frame handling\nI0509 11:26:27.571622 1658 log.go:172] (0xc000687360) (1) Data frame sent\nI0509 11:26:27.571933 1658 log.go:172] (0xc000162840) (0xc000687360) Stream removed, broadcasting: 1\nI0509 11:26:27.572028 1658 log.go:172] (0xc000162840) Go away received\nI0509 11:26:27.572108 1658 log.go:172] (0xc000162840) (0xc000687360) Stream removed, broadcasting: 1\nI0509 11:26:27.572124 1658 log.go:172] (0xc000162840) (0xc00070e000) Stream removed, broadcasting: 3\nI0509 11:26:27.572133 1658 log.go:172] (0xc000162840) (0xc000687400) Stream removed, broadcasting: 5\n" May 9 11:26:27.575: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 11:26:27.575: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 11:26:27.580: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 9 11:26:37.584: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 11:26:37.584: INFO: Waiting for statefulset status.replicas updated to 0 May 9 11:26:37.636: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999539s May 9 11:26:38.642: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.958414145s May 9 11:26:39.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.95314777s May 9 11:26:40.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.94856937s May 9 11:26:41.676: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.944228834s May 9 11:26:42.748: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.918898172s May 9 11:26:43.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.846714401s May 9 11:26:44.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.843084583s May 9 11:26:45.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.839167333s May 9 11:26:46.762: INFO: Verifying statefulset ss doesn't scale past 1 for another 835.647751ms STEP: 
Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-cq426 May 9 11:26:47.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 11:26:47.971: INFO: stderr: "I0509 11:26:47.886151 1681 log.go:172] (0xc00079c160) (0xc0005fa640) Create stream\nI0509 11:26:47.886215 1681 log.go:172] (0xc00079c160) (0xc0005fa640) Stream added, broadcasting: 1\nI0509 11:26:47.889748 1681 log.go:172] (0xc00079c160) Reply frame received for 1\nI0509 11:26:47.889790 1681 log.go:172] (0xc00079c160) (0xc0007d0d20) Create stream\nI0509 11:26:47.889806 1681 log.go:172] (0xc00079c160) (0xc0007d0d20) Stream added, broadcasting: 3\nI0509 11:26:47.890753 1681 log.go:172] (0xc00079c160) Reply frame received for 3\nI0509 11:26:47.890799 1681 log.go:172] (0xc00079c160) (0xc0004d6000) Create stream\nI0509 11:26:47.890824 1681 log.go:172] (0xc00079c160) (0xc0004d6000) Stream added, broadcasting: 5\nI0509 11:26:47.891639 1681 log.go:172] (0xc00079c160) Reply frame received for 5\nI0509 11:26:47.967151 1681 log.go:172] (0xc00079c160) Data frame received for 3\nI0509 11:26:47.967198 1681 log.go:172] (0xc0007d0d20) (3) Data frame handling\nI0509 11:26:47.967211 1681 log.go:172] (0xc0007d0d20) (3) Data frame sent\nI0509 11:26:47.967221 1681 log.go:172] (0xc00079c160) Data frame received for 3\nI0509 11:26:47.967231 1681 log.go:172] (0xc0007d0d20) (3) Data frame handling\nI0509 11:26:47.967265 1681 log.go:172] (0xc00079c160) Data frame received for 5\nI0509 11:26:47.967279 1681 log.go:172] (0xc0004d6000) (5) Data frame handling\nI0509 11:26:47.968702 1681 log.go:172] (0xc00079c160) Data frame received for 1\nI0509 11:26:47.968736 1681 log.go:172] (0xc0005fa640) (1) Data frame handling\nI0509 11:26:47.968751 1681 log.go:172] (0xc0005fa640) (1) Data frame sent\nI0509 11:26:47.968763 1681 log.go:172] (0xc00079c160) (0xc0005fa640) Stream removed, broadcasting: 1\nI0509 11:26:47.968782 1681 log.go:172] (0xc00079c160) Go away received\nI0509 11:26:47.968956 1681 log.go:172] (0xc00079c160) (0xc0005fa640) Stream removed, broadcasting: 1\nI0509 11:26:47.968975 1681 log.go:172] (0xc00079c160) (0xc0007d0d20) Stream removed, broadcasting: 3\nI0509 11:26:47.968986 1681 log.go:172] (0xc00079c160) (0xc0004d6000) Stream removed, broadcasting: 5\n" May 9 11:26:47.971: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 11:26:47.972: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 11:26:47.975: INFO: Found 1 stateful pods, waiting for 3 May 9 11:26:57.980: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 9 11:26:57.980: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 9 11:26:57.980: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 9 11:26:57.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 11:26:58.167: INFO: stderr: "I0509 11:26:58.110417 1703 log.go:172] (0xc0008302c0) (0xc000726640) Create stream\nI0509 11:26:58.110486 1703 
log.go:172] (0xc0008302c0) (0xc000726640) Stream added, broadcasting: 1\nI0509 11:26:58.112630 1703 log.go:172] (0xc0008302c0) Reply frame received for 1\nI0509 11:26:58.112660 1703 log.go:172] (0xc0008302c0) (0xc00065ac80) Create stream\nI0509 11:26:58.112670 1703 log.go:172] (0xc0008302c0) (0xc00065ac80) Stream added, broadcasting: 3\nI0509 11:26:58.113594 1703 log.go:172] (0xc0008302c0) Reply frame received for 3\nI0509 11:26:58.113618 1703 log.go:172] (0xc0008302c0) (0xc00065adc0) Create stream\nI0509 11:26:58.113625 1703 log.go:172] (0xc0008302c0) (0xc00065adc0) Stream added, broadcasting: 5\nI0509 11:26:58.114311 1703 log.go:172] (0xc0008302c0) Reply frame received for 5\nI0509 11:26:58.161871 1703 log.go:172] (0xc0008302c0) Data frame received for 3\nI0509 11:26:58.161909 1703 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0509 11:26:58.161925 1703 log.go:172] (0xc00065ac80) (3) Data frame sent\nI0509 11:26:58.161936 1703 log.go:172] (0xc0008302c0) Data frame received for 3\nI0509 11:26:58.161944 1703 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0509 11:26:58.161954 1703 log.go:172] (0xc0008302c0) Data frame received for 5\nI0509 11:26:58.161967 1703 log.go:172] (0xc00065adc0) (5) Data frame handling\nI0509 11:26:58.163084 1703 log.go:172] (0xc0008302c0) Data frame received for 1\nI0509 11:26:58.163105 1703 log.go:172] (0xc000726640) (1) Data frame handling\nI0509 11:26:58.163119 1703 log.go:172] (0xc000726640) (1) Data frame sent\nI0509 11:26:58.163134 1703 log.go:172] (0xc0008302c0) (0xc000726640) Stream removed, broadcasting: 1\nI0509 11:26:58.163150 1703 log.go:172] (0xc0008302c0) Go away received\nI0509 11:26:58.163412 1703 log.go:172] (0xc0008302c0) (0xc000726640) Stream removed, broadcasting: 1\nI0509 11:26:58.163438 1703 log.go:172] (0xc0008302c0) (0xc00065ac80) Stream removed, broadcasting: 3\nI0509 11:26:58.163451 1703 log.go:172] (0xc0008302c0) (0xc00065adc0) Stream removed, broadcasting: 5\n" May 9 11:26:58.167: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 11:26:58.167: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 11:26:58.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 11:26:58.413: INFO: stderr: "I0509 11:26:58.308376 1726 log.go:172] (0xc000154840) (0xc0007d0640) Create stream\nI0509 11:26:58.308432 1726 log.go:172] (0xc000154840) (0xc0007d0640) Stream added, broadcasting: 1\nI0509 11:26:58.310924 1726 log.go:172] (0xc000154840) Reply frame received for 1\nI0509 11:26:58.310969 1726 log.go:172] (0xc000154840) (0xc00069ad20) Create stream\nI0509 11:26:58.310981 1726 log.go:172] (0xc000154840) (0xc00069ad20) Stream added, broadcasting: 3\nI0509 11:26:58.312134 1726 log.go:172] (0xc000154840) Reply frame received for 3\nI0509 11:26:58.312187 1726 log.go:172] (0xc000154840) (0xc0005e8000) Create stream\nI0509 11:26:58.312206 1726 log.go:172] (0xc000154840) (0xc0005e8000) Stream added, broadcasting: 5\nI0509 11:26:58.313412 1726 log.go:172] (0xc000154840) Reply frame received for 5\nI0509 11:26:58.408173 1726 log.go:172] (0xc000154840) Data frame received for 3\nI0509 11:26:58.408201 1726 log.go:172] (0xc00069ad20) (3) Data frame handling\nI0509 11:26:58.408217 1726 log.go:172] (0xc00069ad20) (3) Data frame sent\nI0509 11:26:58.408224 1726 log.go:172] (0xc000154840) Data frame 
received for 3\nI0509 11:26:58.408231 1726 log.go:172] (0xc00069ad20) (3) Data frame handling\nI0509 11:26:58.408604 1726 log.go:172] (0xc000154840) Data frame received for 5\nI0509 11:26:58.408621 1726 log.go:172] (0xc0005e8000) (5) Data frame handling\nI0509 11:26:58.410622 1726 log.go:172] (0xc000154840) Data frame received for 1\nI0509 11:26:58.410639 1726 log.go:172] (0xc0007d0640) (1) Data frame handling\nI0509 11:26:58.410648 1726 log.go:172] (0xc0007d0640) (1) Data frame sent\nI0509 11:26:58.410673 1726 log.go:172] (0xc000154840) (0xc0007d0640) Stream removed, broadcasting: 1\nI0509 11:26:58.410710 1726 log.go:172] (0xc000154840) Go away received\nI0509 11:26:58.410919 1726 log.go:172] (0xc000154840) (0xc0007d0640) Stream removed, broadcasting: 1\nI0509 11:26:58.410938 1726 log.go:172] (0xc000154840) (0xc00069ad20) Stream removed, broadcasting: 3\nI0509 11:26:58.410947 1726 log.go:172] (0xc000154840) (0xc0005e8000) Stream removed, broadcasting: 5\n" May 9 11:26:58.413: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 11:26:58.413: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 11:26:58.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 11:26:58.706: INFO: stderr: "I0509 11:26:58.550177 1749 log.go:172] (0xc000154840) (0xc0007b9400) Create stream\nI0509 11:26:58.550240 1749 log.go:172] (0xc000154840) (0xc0007b9400) Stream added, broadcasting: 1\nI0509 11:26:58.552452 1749 log.go:172] (0xc000154840) Reply frame received for 1\nI0509 11:26:58.552504 1749 log.go:172] (0xc000154840) (0xc0007b94a0) Create stream\nI0509 11:26:58.552520 1749 log.go:172] (0xc000154840) (0xc0007b94a0) Stream added, broadcasting: 3\nI0509 11:26:58.553576 1749 log.go:172] (0xc000154840) Reply frame received for 3\nI0509 11:26:58.553614 1749 log.go:172] (0xc000154840) (0xc0007b9540) Create stream\nI0509 11:26:58.553625 1749 log.go:172] (0xc000154840) (0xc0007b9540) Stream added, broadcasting: 5\nI0509 11:26:58.554554 1749 log.go:172] (0xc000154840) Reply frame received for 5\nI0509 11:26:58.699477 1749 log.go:172] (0xc000154840) Data frame received for 3\nI0509 11:26:58.699501 1749 log.go:172] (0xc0007b94a0) (3) Data frame handling\nI0509 11:26:58.699508 1749 log.go:172] (0xc0007b94a0) (3) Data frame sent\nI0509 11:26:58.699512 1749 log.go:172] (0xc000154840) Data frame received for 3\nI0509 11:26:58.699517 1749 log.go:172] (0xc0007b94a0) (3) Data frame handling\nI0509 11:26:58.699780 1749 log.go:172] (0xc000154840) Data frame received for 5\nI0509 11:26:58.699800 1749 log.go:172] (0xc0007b9540) (5) Data frame handling\nI0509 11:26:58.701795 1749 log.go:172] (0xc000154840) Data frame received for 1\nI0509 11:26:58.701815 1749 log.go:172] (0xc0007b9400) (1) Data frame handling\nI0509 11:26:58.701831 1749 log.go:172] (0xc0007b9400) (1) Data frame sent\nI0509 11:26:58.701951 1749 log.go:172] (0xc000154840) (0xc0007b9400) Stream removed, broadcasting: 1\nI0509 11:26:58.702045 1749 log.go:172] (0xc000154840) Go away received\nI0509 11:26:58.702188 1749 log.go:172] (0xc000154840) (0xc0007b9400) Stream removed, broadcasting: 1\nI0509 11:26:58.702219 1749 log.go:172] (0xc000154840) (0xc0007b94a0) Stream removed, broadcasting: 3\nI0509 11:26:58.702240 1749 log.go:172] (0xc000154840) (0xc0007b9540) Stream removed, broadcasting: 5\n" May 9 11:26:58.706: 
INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 11:26:58.706: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 11:26:58.706: INFO: Waiting for statefulset status.replicas updated to 0 May 9 11:26:58.713: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 9 11:27:08.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 11:27:08.721: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 9 11:27:08.721: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 9 11:27:08.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999161s May 9 11:27:09.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988460629s May 9 11:27:10.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982948771s May 9 11:27:11.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977328705s May 9 11:27:12.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.937694472s May 9 11:27:13.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.932668942s May 9 11:27:14.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.913593155s May 9 11:27:15.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.908900562s May 9 11:27:16.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.872075557s May 9 11:27:17.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 866.340179ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-cq426 May 9 11:27:18.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 11:27:19.132: INFO: stderr: "I0509 11:27:19.032153 1771 log.go:172] (0xc00082c2c0) (0xc00071e640) Create stream\nI0509 11:27:19.032218 1771 log.go:172] (0xc00082c2c0) (0xc00071e640) Stream added, broadcasting: 1\nI0509 11:27:19.034881 1771 log.go:172] (0xc00082c2c0) Reply frame received for 1\nI0509 11:27:19.034930 1771 log.go:172] (0xc00082c2c0) (0xc00071e6e0) Create stream\nI0509 11:27:19.034940 1771 log.go:172] (0xc00082c2c0) (0xc00071e6e0) Stream added, broadcasting: 3\nI0509 11:27:19.035873 1771 log.go:172] (0xc00082c2c0) Reply frame received for 3\nI0509 11:27:19.035929 1771 log.go:172] (0xc00082c2c0) (0xc00071e780) Create stream\nI0509 11:27:19.035944 1771 log.go:172] (0xc00082c2c0) (0xc00071e780) Stream added, broadcasting: 5\nI0509 11:27:19.036714 1771 log.go:172] (0xc00082c2c0) Reply frame received for 5\nI0509 11:27:19.126272 1771 log.go:172] (0xc00082c2c0) Data frame received for 5\nI0509 11:27:19.126314 1771 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0509 11:27:19.126371 1771 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0509 11:27:19.126396 1771 log.go:172] (0xc00071e6e0) (3) Data frame sent\nI0509 11:27:19.126441 1771 log.go:172] (0xc00071e780) (5) Data frame handling\nI0509 11:27:19.126471 1771 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0509 11:27:19.126480 1771 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0509 11:27:19.127917 1771 log.go:172] (0xc00082c2c0) Data frame received for 1\nI0509 11:27:19.128010 1771 log.go:172] 
(0xc00071e640) (1) Data frame handling\nI0509 11:27:19.128045 1771 log.go:172] (0xc00071e640) (1) Data frame sent\nI0509 11:27:19.128070 1771 log.go:172] (0xc00082c2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0509 11:27:19.128090 1771 log.go:172] (0xc00082c2c0) Go away received\nI0509 11:27:19.128285 1771 log.go:172] (0xc00082c2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0509 11:27:19.128307 1771 log.go:172] (0xc00082c2c0) (0xc00071e6e0) Stream removed, broadcasting: 3\nI0509 11:27:19.128334 1771 log.go:172] (0xc00082c2c0) (0xc00071e780) Stream removed, broadcasting: 5\n" May 9 11:27:19.133: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 11:27:19.133: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 11:27:19.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 11:27:19.323: INFO: stderr: "I0509 11:27:19.252702 1794 log.go:172] (0xc0008322c0) (0xc00073c640) Create stream\nI0509 11:27:19.252762 1794 log.go:172] (0xc0008322c0) (0xc00073c640) Stream added, broadcasting: 1\nI0509 11:27:19.254979 1794 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0509 11:27:19.255056 1794 log.go:172] (0xc0008322c0) (0xc0005ccc80) Create stream\nI0509 11:27:19.255082 1794 log.go:172] (0xc0008322c0) (0xc0005ccc80) Stream added, broadcasting: 3\nI0509 11:27:19.255870 1794 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0509 11:27:19.255903 1794 log.go:172] (0xc0008322c0) (0xc0002cc000) Create stream\nI0509 11:27:19.255911 1794 log.go:172] (0xc0008322c0) (0xc0002cc000) Stream added, broadcasting: 5\nI0509 11:27:19.256756 1794 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0509 11:27:19.317507 1794 log.go:172] (0xc0008322c0) Data frame received for 3\nI0509 11:27:19.317541 1794 log.go:172] (0xc0005ccc80) (3) Data frame handling\nI0509 11:27:19.317561 1794 log.go:172] (0xc0005ccc80) (3) Data frame sent\nI0509 11:27:19.317672 1794 log.go:172] (0xc0008322c0) Data frame received for 3\nI0509 11:27:19.317698 1794 log.go:172] (0xc0005ccc80) (3) Data frame handling\nI0509 11:27:19.317728 1794 log.go:172] (0xc0008322c0) Data frame received for 5\nI0509 11:27:19.317742 1794 log.go:172] (0xc0002cc000) (5) Data frame handling\nI0509 11:27:19.319079 1794 log.go:172] (0xc0008322c0) Data frame received for 1\nI0509 11:27:19.319100 1794 log.go:172] (0xc00073c640) (1) Data frame handling\nI0509 11:27:19.319128 1794 log.go:172] (0xc00073c640) (1) Data frame sent\nI0509 11:27:19.319140 1794 log.go:172] (0xc0008322c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0509 11:27:19.319170 1794 log.go:172] (0xc0008322c0) Go away received\nI0509 11:27:19.319443 1794 log.go:172] (0xc0008322c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0509 11:27:19.319464 1794 log.go:172] (0xc0008322c0) (0xc0005ccc80) Stream removed, broadcasting: 3\nI0509 11:27:19.319478 1794 log.go:172] (0xc0008322c0) (0xc0002cc000) Stream removed, broadcasting: 5\n" May 9 11:27:19.323: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 11:27:19.323: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 11:27:19.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cq426 ss-2 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 11:27:19.522: INFO: stderr: "I0509 11:27:19.451509 1817 log.go:172] (0xc00014c840) (0xc0007a1360) Create stream\nI0509 11:27:19.451567 1817 log.go:172] (0xc00014c840) (0xc0007a1360) Stream added, broadcasting: 1\nI0509 11:27:19.454383 1817 log.go:172] (0xc00014c840) Reply frame received for 1\nI0509 11:27:19.454434 1817 log.go:172] (0xc00014c840) (0xc0005dc000) Create stream\nI0509 11:27:19.454447 1817 log.go:172] (0xc00014c840) (0xc0005dc000) Stream added, broadcasting: 3\nI0509 11:27:19.455552 1817 log.go:172] (0xc00014c840) Reply frame received for 3\nI0509 11:27:19.455594 1817 log.go:172] (0xc00014c840) (0xc0005dc0a0) Create stream\nI0509 11:27:19.455607 1817 log.go:172] (0xc00014c840) (0xc0005dc0a0) Stream added, broadcasting: 5\nI0509 11:27:19.456553 1817 log.go:172] (0xc00014c840) Reply frame received for 5\nI0509 11:27:19.516780 1817 log.go:172] (0xc00014c840) Data frame received for 3\nI0509 11:27:19.516828 1817 log.go:172] (0xc0005dc000) (3) Data frame handling\nI0509 11:27:19.516857 1817 log.go:172] (0xc00014c840) Data frame received for 5\nI0509 11:27:19.516885 1817 log.go:172] (0xc0005dc0a0) (5) Data frame handling\nI0509 11:27:19.516909 1817 log.go:172] (0xc0005dc000) (3) Data frame sent\nI0509 11:27:19.517097 1817 log.go:172] (0xc00014c840) Data frame received for 3\nI0509 11:27:19.517334 1817 log.go:172] (0xc0005dc000) (3) Data frame handling\nI0509 11:27:19.518390 1817 log.go:172] (0xc00014c840) Data frame received for 1\nI0509 11:27:19.518408 1817 log.go:172] (0xc0007a1360) (1) Data frame handling\nI0509 11:27:19.518418 1817 log.go:172] (0xc0007a1360) (1) Data frame sent\nI0509 11:27:19.518426 1817 log.go:172] (0xc00014c840) (0xc0007a1360) Stream removed, broadcasting: 1\nI0509 11:27:19.518574 1817 log.go:172] (0xc00014c840) Go away received\nI0509 11:27:19.518643 1817 log.go:172] (0xc00014c840) (0xc0007a1360) Stream removed, broadcasting: 1\nI0509 11:27:19.518676 1817 log.go:172] (0xc00014c840) (0xc0005dc000) Stream removed, broadcasting: 3\nI0509 11:27:19.518708 1817 log.go:172] (0xc00014c840) (0xc0005dc0a0) Stream removed, broadcasting: 5\n" May 9 11:27:19.522: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 11:27:19.522: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 11:27:19.522: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 9 11:27:49.588: INFO: Deleting all statefulset in ns e2e-tests-statefulset-cq426 May 9 11:27:49.591: INFO: Scaling statefulset ss to 0 May 9 11:27:49.598: INFO: Waiting for statefulset status.replicas updated to 0 May 9 11:27:49.601: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:27:49.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-cq426" for this suite. 
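The ordered-scaling behaviour exercised by this test can be reproduced by hand. A minimal sketch, assuming a StatefulSet named ss whose readiness probe reads /usr/share/nginx/html/index.html (as the e2e web-server fixture does) and reusing the namespace from this run; the mv commands are the same ones the test issues above:

    NS=e2e-tests-statefulset-cq426

    # Break readiness on ss-0; with the default OrderedReady pod management
    # policy the controller must not create ss-1/ss-2 while ss-0 is unready.
    kubectl --namespace="$NS" exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
    kubectl --namespace="$NS" scale statefulset ss --replicas=3

    # Restore readiness; scale-up then proceeds in ordinal order (ss-0, ss-1, ss-2).
    kubectl --namespace="$NS" exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

    # Scaling down while pods are unready halts the same way; once readiness is
    # restored, pods are deleted in reverse ordinal order (ss-2, ss-1, ss-0).
    kubectl --namespace="$NS" scale statefulset ss --replicas=0
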
May 9 11:27:55.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:27:55.708: INFO: namespace: e2e-tests-statefulset-cq426, resource: bindings, ignored listing per whitelist May 9 11:27:55.720: INFO: namespace e2e-tests-statefulset-cq426 deletion completed in 6.089760471s • [SLOW TEST:98.538 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:27:55.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 9 11:27:55.835: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 11:27:55.849: INFO: Waiting for terminating namespaces to be deleted... May 9 11:27:55.852: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 9 11:27:55.858: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 9 11:27:55.858: INFO: Container kube-proxy ready: true, restart count 0 May 9 11:27:55.858: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 9 11:27:55.858: INFO: Container kindnet-cni ready: true, restart count 0 May 9 11:27:55.858: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 9 11:27:55.858: INFO: Container coredns ready: true, restart count 0 May 9 11:27:55.858: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 9 11:27:55.922: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 9 11:27:55.922: INFO: Container kindnet-cni ready: true, restart count 0 May 9 11:27:55.922: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 9 11:27:55.922: INFO: Container coredns ready: true, restart count 0 May 9 11:27:55.922: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 9 11:27:55.922: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. 
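The remaining steps of this test apply a random label to the node found here and relaunch the pod with a matching nodeSelector. A rough manual equivalent, using a hypothetical label key/value (e2e-demo=42) on hunter-worker2 rather than the randomized kubernetes.io/e2e-... key the suite generates, and a pause image purely as a placeholder workload:

    kubectl label node hunter-worker2 e2e-demo=42

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: with-node-selector      # hypothetical pod name
    spec:
      nodeSelector:
        e2e-demo: "42"              # must match the node label exactly
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF

    # Remove the label again once done.
    kubectl label node hunter-worker2 e2e-demo-
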
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2487e01f-91e8-11ea-a20c-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-2487e01f-91e8-11ea-a20c-0242ac110018 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2487e01f-91e8-11ea-a20c-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:28:06.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-mjpds" for this suite. May 9 11:28:16.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:28:16.146: INFO: namespace: e2e-tests-sched-pred-mjpds, resource: bindings, ignored listing per whitelist May 9 11:28:16.199: INFO: namespace e2e-tests-sched-pred-mjpds deletion completed in 10.084206471s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:20.479 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:28:16.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 9 11:28:16.304: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:28:22.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-7whs4" for this suite. 
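The pod used by this test combines always-failing init containers with restartPolicy: Never, so the pod goes straight to Failed and the app container is never started. A minimal sketch of such a pod, with hypothetical names and busybox standing in for the suite's fixture image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo          # hypothetical name
    spec:
      restartPolicy: Never          # a failed init container fails the whole pod
      initContainers:
      - name: init-fail
        image: busybox
        command: ['/bin/false']     # exits non-zero, so the pod ends up Failed
      containers:
      - name: app
        image: busybox
        command: ['sleep', '3600']  # never runs because init failed
    EOF
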
May 9 11:28:28.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:28:28.660: INFO: namespace: e2e-tests-init-container-7whs4, resource: bindings, ignored listing per whitelist May 9 11:28:28.672: INFO: namespace e2e-tests-init-container-7whs4 deletion completed in 6.085824937s • [SLOW TEST:12.472 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:28:28.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-3491ef6d-91e8-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 11:28:28.885: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-gbccc" to be "success or failure" May 9 11:28:28.898: INFO: Pod "pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.898692ms May 9 11:28:31.100: INFO: Pod "pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214416798s May 9 11:28:33.103: INFO: Pod "pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.218013697s May 9 11:28:35.107: INFO: Pod "pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221756611s STEP: Saw pod success May 9 11:28:35.107: INFO: Pod "pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:28:35.110: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 9 11:28:35.277: INFO: Waiting for pod pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018 to disappear May 9 11:28:35.376: INFO: Pod pod-projected-secrets-3493cb6f-91e8-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:28:35.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gbccc" for this suite. 
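A projected secret volume "with mappings" remaps individual secret keys onto chosen paths inside the volume. A minimal sketch under assumed names (a secret called mysecret with a key username), not the randomized names used by the suite:

    kubectl create secret generic mysecret --from-literal=username=admin

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo       # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ['cat', '/projected/renamed/username']
        volumeMounts:
        - name: projected-secret
          mountPath: /projected
      volumes:
      - name: projected-secret
        projected:
          sources:
          - secret:
              name: mysecret
              items:
              - key: username           # secret key
                path: renamed/username  # path it is mapped to inside the volume
    EOF
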
May 9 11:28:41.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:28:41.527: INFO: namespace: e2e-tests-projected-gbccc, resource: bindings, ignored listing per whitelist May 9 11:28:41.541: INFO: namespace e2e-tests-projected-gbccc deletion completed in 6.095973999s • [SLOW TEST:12.869 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:28:41.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-qhvm STEP: Creating a pod to test atomic-volume-subpath May 9 11:28:42.402: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qhvm" in namespace "e2e-tests-subpath-vkrtc" to be "success or failure" May 9 11:28:42.412: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.529253ms May 9 11:28:44.434: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031267077s May 9 11:28:46.438: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036253925s May 9 11:28:49.008: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.605542351s May 9 11:28:51.013: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=true. Elapsed: 8.610558205s May 9 11:28:53.017: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. Elapsed: 10.615025989s May 9 11:28:55.021: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. Elapsed: 12.6191472s May 9 11:28:57.026: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. Elapsed: 14.623324349s May 9 11:28:59.046: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. Elapsed: 16.643287092s May 9 11:29:01.050: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. Elapsed: 18.647827955s May 9 11:29:03.055: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. Elapsed: 20.652354864s May 9 11:29:05.059: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.65645886s May 9 11:29:07.063: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Running", Reason="", readiness=false. Elapsed: 24.661174373s May 9 11:29:09.067: INFO: Pod "pod-subpath-test-projected-qhvm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.664757128s STEP: Saw pod success May 9 11:29:09.067: INFO: Pod "pod-subpath-test-projected-qhvm" satisfied condition "success or failure" May 9 11:29:09.069: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-qhvm container test-container-subpath-projected-qhvm: STEP: delete the pod May 9 11:29:09.112: INFO: Waiting for pod pod-subpath-test-projected-qhvm to disappear May 9 11:29:09.145: INFO: Pod pod-subpath-test-projected-qhvm no longer exists STEP: Deleting pod pod-subpath-test-projected-qhvm May 9 11:29:09.145: INFO: Deleting pod "pod-subpath-test-projected-qhvm" in namespace "e2e-tests-subpath-vkrtc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:29:09.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-vkrtc" for this suite. May 9 11:29:15.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:29:15.197: INFO: namespace: e2e-tests-subpath-vkrtc, resource: bindings, ignored listing per whitelist May 9 11:29:15.228: INFO: namespace e2e-tests-subpath-vkrtc deletion completed in 6.076235092s • [SLOW TEST:33.686 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:29:15.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-50464ab7-91e8-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 11:29:15.354: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-pbf4c" to be "success or failure" May 9 11:29:15.358: INFO: Pod "pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.931043ms May 9 11:29:17.361: INFO: Pod "pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007497006s May 9 11:29:19.366: INFO: Pod "pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011737734s STEP: Saw pod success May 9 11:29:19.366: INFO: Pod "pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:29:19.368: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 9 11:29:19.432: INFO: Waiting for pod pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018 to disappear May 9 11:29:19.462: INFO: Pod pod-projected-secrets-5046e876-91e8-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:29:19.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pbf4c" for this suite. May 9 11:29:25.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:29:25.526: INFO: namespace: e2e-tests-projected-pbf4c, resource: bindings, ignored listing per whitelist May 9 11:29:25.583: INFO: namespace e2e-tests-projected-pbf4c deletion completed in 6.117689211s • [SLOW TEST:10.355 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:29:25.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:29:25.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-s6lv6" to be "success or failure" May 9 11:29:25.729: INFO: Pod "downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.477153ms May 9 11:29:27.733: INFO: Pod "downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016566954s May 9 11:29:29.738: INFO: Pod "downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021099236s STEP: Saw pod success May 9 11:29:29.738: INFO: Pod "downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:29:29.741: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:29:29.779: INFO: Waiting for pod downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018 to disappear May 9 11:29:29.793: INFO: Pod downwardapi-volume-567413ec-91e8-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:29:29.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-s6lv6" for this suite. May 9 11:29:35.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:29:35.888: INFO: namespace: e2e-tests-downward-api-s6lv6, resource: bindings, ignored listing per whitelist May 9 11:29:35.892: INFO: namespace e2e-tests-downward-api-s6lv6 deletion completed in 6.094290587s • [SLOW TEST:10.309 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:29:35.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-bs8sr [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 9 11:29:36.101: INFO: Found 0 stateful pods, waiting for 3 May 9 11:29:46.106: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 11:29:46.106: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 11:29:46.106: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 9 11:29:56.106: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 9 11:29:56.107: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 9 11:29:56.107: INFO: Waiting for pod ss2-2 to 
enter Running - Ready=true, currently Running - Ready=true May 9 11:29:56.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bs8sr ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 11:29:56.378: INFO: stderr: "I0509 11:29:56.261098 1840 log.go:172] (0xc00013a160) (0xc0007165a0) Create stream\nI0509 11:29:56.261401 1840 log.go:172] (0xc00013a160) (0xc0007165a0) Stream added, broadcasting: 1\nI0509 11:29:56.263730 1840 log.go:172] (0xc00013a160) Reply frame received for 1\nI0509 11:29:56.263766 1840 log.go:172] (0xc00013a160) (0xc000716640) Create stream\nI0509 11:29:56.263776 1840 log.go:172] (0xc00013a160) (0xc000716640) Stream added, broadcasting: 3\nI0509 11:29:56.264602 1840 log.go:172] (0xc00013a160) Reply frame received for 3\nI0509 11:29:56.264633 1840 log.go:172] (0xc00013a160) (0xc0007166e0) Create stream\nI0509 11:29:56.264649 1840 log.go:172] (0xc00013a160) (0xc0007166e0) Stream added, broadcasting: 5\nI0509 11:29:56.265717 1840 log.go:172] (0xc00013a160) Reply frame received for 5\nI0509 11:29:56.372059 1840 log.go:172] (0xc00013a160) Data frame received for 3\nI0509 11:29:56.372099 1840 log.go:172] (0xc000716640) (3) Data frame handling\nI0509 11:29:56.372142 1840 log.go:172] (0xc000716640) (3) Data frame sent\nI0509 11:29:56.372159 1840 log.go:172] (0xc00013a160) Data frame received for 3\nI0509 11:29:56.372168 1840 log.go:172] (0xc000716640) (3) Data frame handling\nI0509 11:29:56.372261 1840 log.go:172] (0xc00013a160) Data frame received for 5\nI0509 11:29:56.372290 1840 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0509 11:29:56.374163 1840 log.go:172] (0xc00013a160) Data frame received for 1\nI0509 11:29:56.374182 1840 log.go:172] (0xc0007165a0) (1) Data frame handling\nI0509 11:29:56.374201 1840 log.go:172] (0xc0007165a0) (1) Data frame sent\nI0509 11:29:56.374227 1840 log.go:172] (0xc00013a160) (0xc0007165a0) Stream removed, broadcasting: 1\nI0509 11:29:56.374242 1840 log.go:172] (0xc00013a160) Go away received\nI0509 11:29:56.374492 1840 log.go:172] (0xc00013a160) (0xc0007165a0) Stream removed, broadcasting: 1\nI0509 11:29:56.374525 1840 log.go:172] (0xc00013a160) (0xc000716640) Stream removed, broadcasting: 3\nI0509 11:29:56.374543 1840 log.go:172] (0xc00013a160) (0xc0007166e0) Stream removed, broadcasting: 5\n" May 9 11:29:56.378: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 11:29:56.378: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 9 11:30:06.410: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 9 11:30:16.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bs8sr ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 11:30:16.777: INFO: stderr: "I0509 11:30:16.676550 1864 log.go:172] (0xc00014c840) (0xc0005bb400) Create stream\nI0509 11:30:16.676604 1864 log.go:172] (0xc00014c840) (0xc0005bb400) Stream added, broadcasting: 1\nI0509 11:30:16.678949 1864 log.go:172] (0xc00014c840) Reply frame received for 1\nI0509 11:30:16.678989 1864 log.go:172] (0xc00014c840) (0xc0005b8000) Create stream\nI0509 11:30:16.679007 1864 log.go:172] (0xc00014c840) (0xc0005b8000) Stream 
added, broadcasting: 3\nI0509 11:30:16.679863 1864 log.go:172] (0xc00014c840) Reply frame received for 3\nI0509 11:30:16.679891 1864 log.go:172] (0xc00014c840) (0xc0005b80a0) Create stream\nI0509 11:30:16.679900 1864 log.go:172] (0xc00014c840) (0xc0005b80a0) Stream added, broadcasting: 5\nI0509 11:30:16.680857 1864 log.go:172] (0xc00014c840) Reply frame received for 5\nI0509 11:30:16.771228 1864 log.go:172] (0xc00014c840) Data frame received for 3\nI0509 11:30:16.771284 1864 log.go:172] (0xc0005b8000) (3) Data frame handling\nI0509 11:30:16.771301 1864 log.go:172] (0xc0005b8000) (3) Data frame sent\nI0509 11:30:16.771337 1864 log.go:172] (0xc00014c840) Data frame received for 5\nI0509 11:30:16.771385 1864 log.go:172] (0xc0005b80a0) (5) Data frame handling\nI0509 11:30:16.771426 1864 log.go:172] (0xc00014c840) Data frame received for 3\nI0509 11:30:16.771447 1864 log.go:172] (0xc0005b8000) (3) Data frame handling\nI0509 11:30:16.772815 1864 log.go:172] (0xc00014c840) Data frame received for 1\nI0509 11:30:16.772834 1864 log.go:172] (0xc0005bb400) (1) Data frame handling\nI0509 11:30:16.772850 1864 log.go:172] (0xc0005bb400) (1) Data frame sent\nI0509 11:30:16.772862 1864 log.go:172] (0xc00014c840) (0xc0005bb400) Stream removed, broadcasting: 1\nI0509 11:30:16.772876 1864 log.go:172] (0xc00014c840) Go away received\nI0509 11:30:16.773064 1864 log.go:172] (0xc00014c840) (0xc0005bb400) Stream removed, broadcasting: 1\nI0509 11:30:16.773089 1864 log.go:172] (0xc00014c840) (0xc0005b8000) Stream removed, broadcasting: 3\nI0509 11:30:16.773106 1864 log.go:172] (0xc00014c840) (0xc0005b80a0) Stream removed, broadcasting: 5\n" May 9 11:30:16.777: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 11:30:16.777: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 11:30:36.805: INFO: Waiting for StatefulSet e2e-tests-statefulset-bs8sr/ss2 to complete update May 9 11:30:36.805: INFO: Waiting for Pod e2e-tests-statefulset-bs8sr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 9 11:30:46.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bs8sr ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 11:30:47.155: INFO: stderr: "I0509 11:30:46.950677 1886 log.go:172] (0xc000138790) (0xc0005d52c0) Create stream\nI0509 11:30:46.950732 1886 log.go:172] (0xc000138790) (0xc0005d52c0) Stream added, broadcasting: 1\nI0509 11:30:46.953690 1886 log.go:172] (0xc000138790) Reply frame received for 1\nI0509 11:30:46.953755 1886 log.go:172] (0xc000138790) (0xc000758000) Create stream\nI0509 11:30:46.953777 1886 log.go:172] (0xc000138790) (0xc000758000) Stream added, broadcasting: 3\nI0509 11:30:46.954844 1886 log.go:172] (0xc000138790) Reply frame received for 3\nI0509 11:30:46.954879 1886 log.go:172] (0xc000138790) (0xc000758140) Create stream\nI0509 11:30:46.954888 1886 log.go:172] (0xc000138790) (0xc000758140) Stream added, broadcasting: 5\nI0509 11:30:46.956015 1886 log.go:172] (0xc000138790) Reply frame received for 5\nI0509 11:30:47.150383 1886 log.go:172] (0xc000138790) Data frame received for 3\nI0509 11:30:47.150428 1886 log.go:172] (0xc000758000) (3) Data frame handling\nI0509 11:30:47.150456 1886 log.go:172] (0xc000758000) (3) Data frame sent\nI0509 11:30:47.150522 1886 log.go:172] (0xc000138790) Data frame received for 3\nI0509 11:30:47.150559 
1886 log.go:172] (0xc000758000) (3) Data frame handling\nI0509 11:30:47.150580 1886 log.go:172] (0xc000138790) Data frame received for 5\nI0509 11:30:47.150600 1886 log.go:172] (0xc000758140) (5) Data frame handling\nI0509 11:30:47.152318 1886 log.go:172] (0xc000138790) Data frame received for 1\nI0509 11:30:47.152332 1886 log.go:172] (0xc0005d52c0) (1) Data frame handling\nI0509 11:30:47.152345 1886 log.go:172] (0xc0005d52c0) (1) Data frame sent\nI0509 11:30:47.152355 1886 log.go:172] (0xc000138790) (0xc0005d52c0) Stream removed, broadcasting: 1\nI0509 11:30:47.152503 1886 log.go:172] (0xc000138790) Go away received\nI0509 11:30:47.152554 1886 log.go:172] (0xc000138790) (0xc0005d52c0) Stream removed, broadcasting: 1\nI0509 11:30:47.152567 1886 log.go:172] (0xc000138790) (0xc000758000) Stream removed, broadcasting: 3\nI0509 11:30:47.152576 1886 log.go:172] (0xc000138790) (0xc000758140) Stream removed, broadcasting: 5\n" May 9 11:30:47.155: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 11:30:47.155: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 11:30:57.185: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 9 11:31:07.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bs8sr ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 11:31:07.487: INFO: stderr: "I0509 11:31:07.330645 1909 log.go:172] (0xc000138840) (0xc0005dd2c0) Create stream\nI0509 11:31:07.330701 1909 log.go:172] (0xc000138840) (0xc0005dd2c0) Stream added, broadcasting: 1\nI0509 11:31:07.332935 1909 log.go:172] (0xc000138840) Reply frame received for 1\nI0509 11:31:07.332978 1909 log.go:172] (0xc000138840) (0xc0005dd360) Create stream\nI0509 11:31:07.332993 1909 log.go:172] (0xc000138840) (0xc0005dd360) Stream added, broadcasting: 3\nI0509 11:31:07.334063 1909 log.go:172] (0xc000138840) Reply frame received for 3\nI0509 11:31:07.334112 1909 log.go:172] (0xc000138840) (0xc00075a000) Create stream\nI0509 11:31:07.334131 1909 log.go:172] (0xc000138840) (0xc00075a000) Stream added, broadcasting: 5\nI0509 11:31:07.335196 1909 log.go:172] (0xc000138840) Reply frame received for 5\nI0509 11:31:07.483760 1909 log.go:172] (0xc000138840) Data frame received for 5\nI0509 11:31:07.483791 1909 log.go:172] (0xc00075a000) (5) Data frame handling\nI0509 11:31:07.483813 1909 log.go:172] (0xc000138840) Data frame received for 3\nI0509 11:31:07.483822 1909 log.go:172] (0xc0005dd360) (3) Data frame handling\nI0509 11:31:07.483829 1909 log.go:172] (0xc0005dd360) (3) Data frame sent\nI0509 11:31:07.483837 1909 log.go:172] (0xc000138840) Data frame received for 3\nI0509 11:31:07.483845 1909 log.go:172] (0xc0005dd360) (3) Data frame handling\nI0509 11:31:07.484640 1909 log.go:172] (0xc000138840) Data frame received for 1\nI0509 11:31:07.484655 1909 log.go:172] (0xc0005dd2c0) (1) Data frame handling\nI0509 11:31:07.484666 1909 log.go:172] (0xc0005dd2c0) (1) Data frame sent\nI0509 11:31:07.484678 1909 log.go:172] (0xc000138840) (0xc0005dd2c0) Stream removed, broadcasting: 1\nI0509 11:31:07.484717 1909 log.go:172] (0xc000138840) Go away received\nI0509 11:31:07.484817 1909 log.go:172] (0xc000138840) (0xc0005dd2c0) Stream removed, broadcasting: 1\nI0509 11:31:07.484828 1909 log.go:172] (0xc000138840) (0xc0005dd360) Stream removed, broadcasting: 3\nI0509 11:31:07.484835 1909 log.go:172] (0xc000138840) 
(0xc00075a000) Stream removed, broadcasting: 5\n" May 9 11:31:07.487: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 11:31:07.487: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 11:31:17.504: INFO: Waiting for StatefulSet e2e-tests-statefulset-bs8sr/ss2 to complete update May 9 11:31:17.504: INFO: Waiting for Pod e2e-tests-statefulset-bs8sr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 9 11:31:17.504: INFO: Waiting for Pod e2e-tests-statefulset-bs8sr/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 9 11:31:17.504: INFO: Waiting for Pod e2e-tests-statefulset-bs8sr/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 9 11:31:27.510: INFO: Waiting for StatefulSet e2e-tests-statefulset-bs8sr/ss2 to complete update May 9 11:31:27.510: INFO: Waiting for Pod e2e-tests-statefulset-bs8sr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 9 11:31:27.510: INFO: Waiting for Pod e2e-tests-statefulset-bs8sr/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 9 11:31:37.511: INFO: Waiting for StatefulSet e2e-tests-statefulset-bs8sr/ss2 to complete update May 9 11:31:37.511: INFO: Waiting for Pod e2e-tests-statefulset-bs8sr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 9 11:31:47.511: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bs8sr May 9 11:31:47.513: INFO: Scaling statefulset ss2 to 0 May 9 11:32:27.549: INFO: Waiting for statefulset status.replicas updated to 0 May 9 11:32:27.551: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:32:27.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-bs8sr" for this suite. 
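The update and rollback driven programmatically above can be approximated with kubectl. A sketch assuming the container in ss2's pod template is named nginx (as the e2e web-server fixture's is) and a kubectl recent enough to support rollout commands for StatefulSets:

    NS=e2e-tests-statefulset-bs8sr

    # Trigger a rolling update of the template; under the default RollingUpdate
    # strategy pods are replaced in reverse ordinal order.
    kubectl -n "$NS" set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
    kubectl -n "$NS" rollout status statefulset/ss2

    # Roll back to the previous controller revision.
    kubectl -n "$NS" rollout undo statefulset/ss2
    kubectl -n "$NS" rollout status statefulset/ss2
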
May 9 11:32:33.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:32:33.657: INFO: namespace: e2e-tests-statefulset-bs8sr, resource: bindings, ignored listing per whitelist May 9 11:32:33.705: INFO: namespace e2e-tests-statefulset-bs8sr deletion completed in 6.118314017s • [SLOW TEST:177.813 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:32:33.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7wfch STEP: creating a selector STEP: Creating the service pods in kubernetes May 9 11:32:34.000: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 9 11:32:58.091: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.106:8080/dial?request=hostName&protocol=http&host=10.244.2.215&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-7wfch PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:32:58.091: INFO: >>> kubeConfig: /root/.kube/config I0509 11:32:58.128667 6 log.go:172] (0xc000db18c0) (0xc001845680) Create stream I0509 11:32:58.128696 6 log.go:172] (0xc000db18c0) (0xc001845680) Stream added, broadcasting: 1 I0509 11:32:58.131242 6 log.go:172] (0xc000db18c0) Reply frame received for 1 I0509 11:32:58.131285 6 log.go:172] (0xc000db18c0) (0xc0015ab5e0) Create stream I0509 11:32:58.131299 6 log.go:172] (0xc000db18c0) (0xc0015ab5e0) Stream added, broadcasting: 3 I0509 11:32:58.132218 6 log.go:172] (0xc000db18c0) Reply frame received for 3 I0509 11:32:58.132254 6 log.go:172] (0xc000db18c0) (0xc000a77180) Create stream I0509 11:32:58.132267 6 log.go:172] (0xc000db18c0) (0xc000a77180) Stream added, broadcasting: 5 I0509 11:32:58.133368 6 log.go:172] (0xc000db18c0) Reply frame received for 5 I0509 11:32:58.218151 6 log.go:172] (0xc000db18c0) Data frame received for 3 I0509 11:32:58.218191 6 log.go:172] (0xc0015ab5e0) (3) Data frame handling I0509 11:32:58.218219 6 log.go:172] (0xc0015ab5e0) (3) Data frame sent I0509 11:32:58.218964 6 log.go:172] (0xc000db18c0) Data frame received for 3 I0509 11:32:58.219002 6 log.go:172] (0xc0015ab5e0) (3) Data frame handling I0509 11:32:58.219698 6 log.go:172] (0xc000db18c0) Data frame received 
for 5 I0509 11:32:58.219718 6 log.go:172] (0xc000a77180) (5) Data frame handling I0509 11:32:58.220697 6 log.go:172] (0xc000db18c0) Data frame received for 1 I0509 11:32:58.220722 6 log.go:172] (0xc001845680) (1) Data frame handling I0509 11:32:58.220740 6 log.go:172] (0xc001845680) (1) Data frame sent I0509 11:32:58.220762 6 log.go:172] (0xc000db18c0) (0xc001845680) Stream removed, broadcasting: 1 I0509 11:32:58.220809 6 log.go:172] (0xc000db18c0) Go away received I0509 11:32:58.221007 6 log.go:172] (0xc000db18c0) (0xc001845680) Stream removed, broadcasting: 1 I0509 11:32:58.221024 6 log.go:172] (0xc000db18c0) (0xc0015ab5e0) Stream removed, broadcasting: 3 I0509 11:32:58.221030 6 log.go:172] (0xc000db18c0) (0xc000a77180) Stream removed, broadcasting: 5 May 9 11:32:58.221: INFO: Waiting for endpoints: map[] May 9 11:32:58.224: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.106:8080/dial?request=hostName&protocol=http&host=10.244.1.105&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-7wfch PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:32:58.224: INFO: >>> kubeConfig: /root/.kube/config I0509 11:32:58.259866 6 log.go:172] (0xc00129e2c0) (0xc0015aba40) Create stream I0509 11:32:58.259898 6 log.go:172] (0xc00129e2c0) (0xc0015aba40) Stream added, broadcasting: 1 I0509 11:32:58.262399 6 log.go:172] (0xc00129e2c0) Reply frame received for 1 I0509 11:32:58.262470 6 log.go:172] (0xc00129e2c0) (0xc0015abae0) Create stream I0509 11:32:58.262496 6 log.go:172] (0xc00129e2c0) (0xc0015abae0) Stream added, broadcasting: 3 I0509 11:32:58.263629 6 log.go:172] (0xc00129e2c0) Reply frame received for 3 I0509 11:32:58.263668 6 log.go:172] (0xc00129e2c0) (0xc001cafc20) Create stream I0509 11:32:58.263681 6 log.go:172] (0xc00129e2c0) (0xc001cafc20) Stream added, broadcasting: 5 I0509 11:32:58.264651 6 log.go:172] (0xc00129e2c0) Reply frame received for 5 I0509 11:32:58.337555 6 log.go:172] (0xc00129e2c0) Data frame received for 3 I0509 11:32:58.337586 6 log.go:172] (0xc0015abae0) (3) Data frame handling I0509 11:32:58.337624 6 log.go:172] (0xc0015abae0) (3) Data frame sent I0509 11:32:58.338069 6 log.go:172] (0xc00129e2c0) Data frame received for 3 I0509 11:32:58.338081 6 log.go:172] (0xc0015abae0) (3) Data frame handling I0509 11:32:58.338459 6 log.go:172] (0xc00129e2c0) Data frame received for 5 I0509 11:32:58.338502 6 log.go:172] (0xc001cafc20) (5) Data frame handling I0509 11:32:58.340108 6 log.go:172] (0xc00129e2c0) Data frame received for 1 I0509 11:32:58.340124 6 log.go:172] (0xc0015aba40) (1) Data frame handling I0509 11:32:58.340133 6 log.go:172] (0xc0015aba40) (1) Data frame sent I0509 11:32:58.340144 6 log.go:172] (0xc00129e2c0) (0xc0015aba40) Stream removed, broadcasting: 1 I0509 11:32:58.340185 6 log.go:172] (0xc00129e2c0) Go away received I0509 11:32:58.340236 6 log.go:172] (0xc00129e2c0) (0xc0015aba40) Stream removed, broadcasting: 1 I0509 11:32:58.340288 6 log.go:172] (0xc00129e2c0) (0xc0015abae0) Stream removed, broadcasting: 3 I0509 11:32:58.340304 6 log.go:172] (0xc00129e2c0) (0xc001cafc20) Stream removed, broadcasting: 5 May 9 11:32:58.340: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:32:58.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-7wfch" for this suite. 
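Editor's note: the exec'd curl above is how the conformance test checks pod-to-pod HTTP reachability; it asks the host-network test pod's netserver to hit the target pod's /dial endpoint and compares the hostnames that answer against the expected endpoints. Below is a minimal standalone sketch of that probe, not the framework's own code: the IPs are placeholders copied from the log, and the `responses` JSON field is the shape the e2e framework expects back from /dial.

```go
// dialcheck.go — hypothetical sketch of the pod-to-pod HTTP reachability probe.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// dialResponse mirrors the JSON the netserver's /dial endpoint returns.
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	proxy := "10.244.1.106:8080" // host-test-container-pod IP:port (placeholder from the log)
	target := "10.244.2.215"     // pod under test (placeholder from the log)

	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", target)
	q.Set("port", "8080")
	q.Set("tries", "1")

	resp, err := http.Get(fmt.Sprintf("http://%s/dial?%s", proxy, q.Encode()))
	if err != nil {
		log.Fatalf("dial request failed: %v", err)
	}
	defer resp.Body.Close()

	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		log.Fatalf("decoding /dial response: %v", err)
	}
	// The test passes when every expected endpoint hostname shows up here.
	fmt.Printf("endpoints that answered: %v\n", dr.Responses)
}
```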
May 9 11:33:18.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:33:18.427: INFO: namespace: e2e-tests-pod-network-test-7wfch, resource: bindings, ignored listing per whitelist May 9 11:33:18.443: INFO: namespace e2e-tests-pod-network-test-7wfch deletion completed in 20.099138082s • [SLOW TEST:44.738 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:33:18.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:33:18.623: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 9 11:33:18.640: INFO: Number of nodes with available pods: 0 May 9 11:33:18.640: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
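Editor's note: the step just logged relabels a node so that it matches the DaemonSet's node selector, then waits for the daemon pod to land on it; later the label is switched again and the pod is unscheduled. A rough sketch of such a node-selector DaemonSet follows — the label key, names, and image are illustrative, not the actual test fixture.

```go
// daemonset_selector.go — hypothetical sketch of a DaemonSet gated by a node label.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "daemon-set"} // illustrative pod labels

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy the test switches to midway through.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Daemon pods only schedule onto nodes carrying this label, so
					// relabelling a node to "blue" makes the pod appear there and
					// relabelling it away removes the pod again.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1", // placeholder image
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```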
May 9 11:33:18.753: INFO: Number of nodes with available pods: 0 May 9 11:33:18.753: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:19.758: INFO: Number of nodes with available pods: 0 May 9 11:33:19.758: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:20.758: INFO: Number of nodes with available pods: 0 May 9 11:33:20.758: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:21.795: INFO: Number of nodes with available pods: 0 May 9 11:33:21.795: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:22.758: INFO: Number of nodes with available pods: 1 May 9 11:33:22.758: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 9 11:33:22.803: INFO: Number of nodes with available pods: 1 May 9 11:33:22.803: INFO: Number of running nodes: 0, number of available pods: 1 May 9 11:33:23.808: INFO: Number of nodes with available pods: 0 May 9 11:33:23.808: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 9 11:33:23.821: INFO: Number of nodes with available pods: 0 May 9 11:33:23.821: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:24.824: INFO: Number of nodes with available pods: 0 May 9 11:33:24.824: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:25.826: INFO: Number of nodes with available pods: 0 May 9 11:33:25.826: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:26.826: INFO: Number of nodes with available pods: 0 May 9 11:33:26.826: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:27.825: INFO: Number of nodes with available pods: 0 May 9 11:33:27.825: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:28.825: INFO: Number of nodes with available pods: 0 May 9 11:33:28.825: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:29.825: INFO: Number of nodes with available pods: 0 May 9 11:33:29.825: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:30.826: INFO: Number of nodes with available pods: 0 May 9 11:33:30.826: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:31.825: INFO: Number of nodes with available pods: 0 May 9 11:33:31.825: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:32.825: INFO: Number of nodes with available pods: 0 May 9 11:33:32.825: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:33.826: INFO: Number of nodes with available pods: 0 May 9 11:33:33.826: INFO: Node hunter-worker is running more than one daemon pod May 9 11:33:34.825: INFO: Number of nodes with available pods: 1 May 9 11:33:34.825: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mkcxh, will wait for the garbage collector to delete the pods May 9 11:33:34.891: INFO: Deleting DaemonSet.extensions daemon-set took: 6.193528ms May 9 11:33:34.991: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.224825ms May 9 11:33:41.394: INFO: Number of nodes with available pods: 0 May 9 11:33:41.394: INFO: Number of running 
nodes: 0, number of available pods: 0 May 9 11:33:41.396: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mkcxh/daemonsets","resourceVersion":"9582197"},"items":null} May 9 11:33:41.399: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mkcxh/pods","resourceVersion":"9582197"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:33:41.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mkcxh" for this suite. May 9 11:33:47.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:33:47.551: INFO: namespace: e2e-tests-daemonsets-mkcxh, resource: bindings, ignored listing per whitelist May 9 11:33:47.581: INFO: namespace e2e-tests-daemonsets-mkcxh deletion completed in 6.098636688s • [SLOW TEST:29.138 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:33:47.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-xhd79 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xhd79 to expose endpoints map[] May 9 11:33:47.719: INFO: Get endpoints failed (12.640674ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 9 11:33:48.724: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xhd79 exposes endpoints map[] (1.0168086s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-xhd79 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xhd79 to expose endpoints map[pod1:[80]] May 9 11:33:51.780: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xhd79 exposes endpoints map[pod1:[80]] (3.049834446s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-xhd79 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xhd79 to expose endpoints map[pod1:[80] pod2:[80]] May 9 11:33:55.906: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xhd79 exposes endpoints map[pod1:[80] pod2:[80]] (4.122428451s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-xhd79 STEP: waiting up to 3m0s for 
service endpoint-test2 in namespace e2e-tests-services-xhd79 to expose endpoints map[pod2:[80]] May 9 11:33:56.936: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xhd79 exposes endpoints map[pod2:[80]] (1.025406003s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-xhd79 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xhd79 to expose endpoints map[] May 9 11:33:57.946: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xhd79 exposes endpoints map[] (1.006438611s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:33:57.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-xhd79" for this suite. May 9 11:34:04.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:34:04.088: INFO: namespace: e2e-tests-services-xhd79, resource: bindings, ignored listing per whitelist May 9 11:34:04.134: INFO: namespace e2e-tests-services-xhd79 deletion completed in 6.096578881s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:16.552 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:34:04.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:34:04.283: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-cvjfj" to be "success or failure" May 9 11:34:04.308: INFO: Pod "downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.427677ms May 9 11:34:06.312: INFO: Pod "downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029144046s May 9 11:34:08.317: INFO: Pod "downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033810044s STEP: Saw pod success May 9 11:34:08.317: INFO: Pod "downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:34:08.320: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:34:08.338: INFO: Waiting for pod downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018 to disappear May 9 11:34:08.343: INFO: Pod downwardapi-volume-fc7b2926-91e8-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:34:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cvjfj" for this suite. May 9 11:34:14.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:34:14.417: INFO: namespace: e2e-tests-downward-api-cvjfj, resource: bindings, ignored listing per whitelist May 9 11:34:14.444: INFO: namespace e2e-tests-downward-api-cvjfj deletion completed in 6.098413463s • [SLOW TEST:10.310 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:34:14.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 9 11:34:24.610: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:24.610: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:24.645012 6 log.go:172] (0xc00195a420) (0xc001419900) Create stream I0509 11:34:24.645035 6 log.go:172] (0xc00195a420) (0xc001419900) Stream added, broadcasting: 1 I0509 11:34:24.646821 6 log.go:172] (0xc00195a420) Reply frame received for 1 I0509 11:34:24.646872 6 log.go:172] (0xc00195a420) (0xc001c57cc0) Create stream I0509 11:34:24.646893 6 log.go:172] (0xc00195a420) (0xc001c57cc0) Stream added, broadcasting: 3 I0509 11:34:24.648160 6 log.go:172] (0xc00195a420) Reply frame received for 3 I0509 11:34:24.648219 6 log.go:172] (0xc00195a420) (0xc00263ec80) Create stream I0509 11:34:24.648245 6 log.go:172] (0xc00195a420) 
(0xc00263ec80) Stream added, broadcasting: 5 I0509 11:34:24.649353 6 log.go:172] (0xc00195a420) Reply frame received for 5 I0509 11:34:24.754966 6 log.go:172] (0xc00195a420) Data frame received for 3 I0509 11:34:24.755026 6 log.go:172] (0xc001c57cc0) (3) Data frame handling I0509 11:34:24.755053 6 log.go:172] (0xc001c57cc0) (3) Data frame sent I0509 11:34:24.755072 6 log.go:172] (0xc00195a420) Data frame received for 3 I0509 11:34:24.755137 6 log.go:172] (0xc001c57cc0) (3) Data frame handling I0509 11:34:24.755179 6 log.go:172] (0xc00195a420) Data frame received for 5 I0509 11:34:24.755219 6 log.go:172] (0xc00263ec80) (5) Data frame handling I0509 11:34:24.756672 6 log.go:172] (0xc00195a420) Data frame received for 1 I0509 11:34:24.756735 6 log.go:172] (0xc001419900) (1) Data frame handling I0509 11:34:24.756807 6 log.go:172] (0xc001419900) (1) Data frame sent I0509 11:34:24.756841 6 log.go:172] (0xc00195a420) (0xc001419900) Stream removed, broadcasting: 1 I0509 11:34:24.756866 6 log.go:172] (0xc00195a420) Go away received I0509 11:34:24.757008 6 log.go:172] (0xc00195a420) (0xc001419900) Stream removed, broadcasting: 1 I0509 11:34:24.757043 6 log.go:172] (0xc00195a420) (0xc001c57cc0) Stream removed, broadcasting: 3 I0509 11:34:24.757073 6 log.go:172] (0xc00195a420) (0xc00263ec80) Stream removed, broadcasting: 5 May 9 11:34:24.757: INFO: Exec stderr: "" May 9 11:34:24.757: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:24.757: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:24.801016 6 log.go:172] (0xc000db13f0) (0xc001db8140) Create stream I0509 11:34:24.801043 6 log.go:172] (0xc000db13f0) (0xc001db8140) Stream added, broadcasting: 1 I0509 11:34:24.802764 6 log.go:172] (0xc000db13f0) Reply frame received for 1 I0509 11:34:24.802807 6 log.go:172] (0xc000db13f0) (0xc002546000) Create stream I0509 11:34:24.802822 6 log.go:172] (0xc000db13f0) (0xc002546000) Stream added, broadcasting: 3 I0509 11:34:24.803738 6 log.go:172] (0xc000db13f0) Reply frame received for 3 I0509 11:34:24.803802 6 log.go:172] (0xc000db13f0) (0xc0025460a0) Create stream I0509 11:34:24.803822 6 log.go:172] (0xc000db13f0) (0xc0025460a0) Stream added, broadcasting: 5 I0509 11:34:24.804601 6 log.go:172] (0xc000db13f0) Reply frame received for 5 I0509 11:34:24.916130 6 log.go:172] (0xc000db13f0) Data frame received for 5 I0509 11:34:24.916159 6 log.go:172] (0xc0025460a0) (5) Data frame handling I0509 11:34:24.916184 6 log.go:172] (0xc000db13f0) Data frame received for 3 I0509 11:34:24.916193 6 log.go:172] (0xc002546000) (3) Data frame handling I0509 11:34:24.916202 6 log.go:172] (0xc002546000) (3) Data frame sent I0509 11:34:24.916211 6 log.go:172] (0xc000db13f0) Data frame received for 3 I0509 11:34:24.916218 6 log.go:172] (0xc002546000) (3) Data frame handling I0509 11:34:24.917311 6 log.go:172] (0xc000db13f0) Data frame received for 1 I0509 11:34:24.917342 6 log.go:172] (0xc001db8140) (1) Data frame handling I0509 11:34:24.917359 6 log.go:172] (0xc001db8140) (1) Data frame sent I0509 11:34:24.917387 6 log.go:172] (0xc000db13f0) (0xc001db8140) Stream removed, broadcasting: 1 I0509 11:34:24.917408 6 log.go:172] (0xc000db13f0) Go away received I0509 11:34:24.917493 6 log.go:172] (0xc000db13f0) (0xc001db8140) Stream removed, broadcasting: 1 I0509 11:34:24.917510 6 log.go:172] (0xc000db13f0) (0xc002546000) Stream removed, broadcasting: 3 I0509 
11:34:24.917523 6 log.go:172] (0xc000db13f0) (0xc0025460a0) Stream removed, broadcasting: 5 May 9 11:34:24.917: INFO: Exec stderr: "" May 9 11:34:24.917: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:24.917: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:24.941471 6 log.go:172] (0xc0000ebce0) (0xc002546320) Create stream I0509 11:34:24.941489 6 log.go:172] (0xc0000ebce0) (0xc002546320) Stream added, broadcasting: 1 I0509 11:34:24.942813 6 log.go:172] (0xc0000ebce0) Reply frame received for 1 I0509 11:34:24.942849 6 log.go:172] (0xc0000ebce0) (0xc002546460) Create stream I0509 11:34:24.942860 6 log.go:172] (0xc0000ebce0) (0xc002546460) Stream added, broadcasting: 3 I0509 11:34:24.945452 6 log.go:172] (0xc0000ebce0) Reply frame received for 3 I0509 11:34:24.945487 6 log.go:172] (0xc0000ebce0) (0xc001db81e0) Create stream I0509 11:34:24.945501 6 log.go:172] (0xc0000ebce0) (0xc001db81e0) Stream added, broadcasting: 5 I0509 11:34:24.946344 6 log.go:172] (0xc0000ebce0) Reply frame received for 5 I0509 11:34:24.995345 6 log.go:172] (0xc0000ebce0) Data frame received for 3 I0509 11:34:24.995392 6 log.go:172] (0xc0000ebce0) Data frame received for 5 I0509 11:34:24.995444 6 log.go:172] (0xc001db81e0) (5) Data frame handling I0509 11:34:24.995473 6 log.go:172] (0xc002546460) (3) Data frame handling I0509 11:34:24.995488 6 log.go:172] (0xc002546460) (3) Data frame sent I0509 11:34:24.995507 6 log.go:172] (0xc0000ebce0) Data frame received for 3 I0509 11:34:24.995530 6 log.go:172] (0xc002546460) (3) Data frame handling I0509 11:34:24.996386 6 log.go:172] (0xc0000ebce0) Data frame received for 1 I0509 11:34:24.996400 6 log.go:172] (0xc002546320) (1) Data frame handling I0509 11:34:24.996409 6 log.go:172] (0xc002546320) (1) Data frame sent I0509 11:34:24.996415 6 log.go:172] (0xc0000ebce0) (0xc002546320) Stream removed, broadcasting: 1 I0509 11:34:24.996477 6 log.go:172] (0xc0000ebce0) (0xc002546320) Stream removed, broadcasting: 1 I0509 11:34:24.996488 6 log.go:172] (0xc0000ebce0) (0xc002546460) Stream removed, broadcasting: 3 I0509 11:34:24.996492 6 log.go:172] (0xc0000ebce0) (0xc001db81e0) Stream removed, broadcasting: 5 May 9 11:34:24.996: INFO: Exec stderr: "" May 9 11:34:24.996: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:24.996: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:24.996774 6 log.go:172] (0xc0000ebce0) Go away received I0509 11:34:25.025681 6 log.go:172] (0xc002770210) (0xc0025466e0) Create stream I0509 11:34:25.025718 6 log.go:172] (0xc002770210) (0xc0025466e0) Stream added, broadcasting: 1 I0509 11:34:25.028208 6 log.go:172] (0xc002770210) Reply frame received for 1 I0509 11:34:25.028247 6 log.go:172] (0xc002770210) (0xc002546820) Create stream I0509 11:34:25.028262 6 log.go:172] (0xc002770210) (0xc002546820) Stream added, broadcasting: 3 I0509 11:34:25.029343 6 log.go:172] (0xc002770210) Reply frame received for 3 I0509 11:34:25.029376 6 log.go:172] (0xc002770210) (0xc001db8280) Create stream I0509 11:34:25.029389 6 log.go:172] (0xc002770210) (0xc001db8280) Stream added, broadcasting: 5 I0509 11:34:25.030414 6 log.go:172] (0xc002770210) Reply frame received for 5 I0509 11:34:25.087425 6 log.go:172] (0xc002770210) Data 
frame received for 5 I0509 11:34:25.087469 6 log.go:172] (0xc001db8280) (5) Data frame handling I0509 11:34:25.087503 6 log.go:172] (0xc002770210) Data frame received for 3 I0509 11:34:25.087517 6 log.go:172] (0xc002546820) (3) Data frame handling I0509 11:34:25.087545 6 log.go:172] (0xc002546820) (3) Data frame sent I0509 11:34:25.087566 6 log.go:172] (0xc002770210) Data frame received for 3 I0509 11:34:25.087583 6 log.go:172] (0xc002546820) (3) Data frame handling I0509 11:34:25.089468 6 log.go:172] (0xc002770210) Data frame received for 1 I0509 11:34:25.089506 6 log.go:172] (0xc0025466e0) (1) Data frame handling I0509 11:34:25.089533 6 log.go:172] (0xc0025466e0) (1) Data frame sent I0509 11:34:25.089566 6 log.go:172] (0xc002770210) (0xc0025466e0) Stream removed, broadcasting: 1 I0509 11:34:25.089661 6 log.go:172] (0xc002770210) Go away received I0509 11:34:25.089715 6 log.go:172] (0xc002770210) (0xc0025466e0) Stream removed, broadcasting: 1 I0509 11:34:25.089756 6 log.go:172] (0xc002770210) (0xc002546820) Stream removed, broadcasting: 3 I0509 11:34:25.089773 6 log.go:172] (0xc002770210) (0xc001db8280) Stream removed, broadcasting: 5 May 9 11:34:25.089: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 9 11:34:25.089: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:25.089: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:25.122537 6 log.go:172] (0xc0027706e0) (0xc002546a00) Create stream I0509 11:34:25.122577 6 log.go:172] (0xc0027706e0) (0xc002546a00) Stream added, broadcasting: 1 I0509 11:34:25.124338 6 log.go:172] (0xc0027706e0) Reply frame received for 1 I0509 11:34:25.124383 6 log.go:172] (0xc0027706e0) (0xc002546aa0) Create stream I0509 11:34:25.124398 6 log.go:172] (0xc0027706e0) (0xc002546aa0) Stream added, broadcasting: 3 I0509 11:34:25.125685 6 log.go:172] (0xc0027706e0) Reply frame received for 3 I0509 11:34:25.125734 6 log.go:172] (0xc0027706e0) (0xc00231a000) Create stream I0509 11:34:25.125750 6 log.go:172] (0xc0027706e0) (0xc00231a000) Stream added, broadcasting: 5 I0509 11:34:25.126896 6 log.go:172] (0xc0027706e0) Reply frame received for 5 I0509 11:34:25.192231 6 log.go:172] (0xc0027706e0) Data frame received for 3 I0509 11:34:25.192257 6 log.go:172] (0xc002546aa0) (3) Data frame handling I0509 11:34:25.192274 6 log.go:172] (0xc002546aa0) (3) Data frame sent I0509 11:34:25.192282 6 log.go:172] (0xc0027706e0) Data frame received for 3 I0509 11:34:25.192288 6 log.go:172] (0xc002546aa0) (3) Data frame handling I0509 11:34:25.192429 6 log.go:172] (0xc0027706e0) Data frame received for 5 I0509 11:34:25.192452 6 log.go:172] (0xc00231a000) (5) Data frame handling I0509 11:34:25.194187 6 log.go:172] (0xc0027706e0) Data frame received for 1 I0509 11:34:25.194211 6 log.go:172] (0xc002546a00) (1) Data frame handling I0509 11:34:25.194232 6 log.go:172] (0xc002546a00) (1) Data frame sent I0509 11:34:25.194253 6 log.go:172] (0xc0027706e0) (0xc002546a00) Stream removed, broadcasting: 1 I0509 11:34:25.194361 6 log.go:172] (0xc0027706e0) (0xc002546a00) Stream removed, broadcasting: 1 I0509 11:34:25.194386 6 log.go:172] (0xc0027706e0) (0xc002546aa0) Stream removed, broadcasting: 3 I0509 11:34:25.194548 6 log.go:172] (0xc0027706e0) Go away received I0509 11:34:25.194603 6 log.go:172] (0xc0027706e0) (0xc00231a000) Stream removed, 
broadcasting: 5 May 9 11:34:25.194: INFO: Exec stderr: "" May 9 11:34:25.194: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:25.194: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:25.228393 6 log.go:172] (0xc000db1ce0) (0xc001db85a0) Create stream I0509 11:34:25.228434 6 log.go:172] (0xc000db1ce0) (0xc001db85a0) Stream added, broadcasting: 1 I0509 11:34:25.229982 6 log.go:172] (0xc000db1ce0) Reply frame received for 1 I0509 11:34:25.230024 6 log.go:172] (0xc000db1ce0) (0xc001db8640) Create stream I0509 11:34:25.230038 6 log.go:172] (0xc000db1ce0) (0xc001db8640) Stream added, broadcasting: 3 I0509 11:34:25.230783 6 log.go:172] (0xc000db1ce0) Reply frame received for 3 I0509 11:34:25.230833 6 log.go:172] (0xc000db1ce0) (0xc001884000) Create stream I0509 11:34:25.230851 6 log.go:172] (0xc000db1ce0) (0xc001884000) Stream added, broadcasting: 5 I0509 11:34:25.231624 6 log.go:172] (0xc000db1ce0) Reply frame received for 5 I0509 11:34:25.294946 6 log.go:172] (0xc000db1ce0) Data frame received for 3 I0509 11:34:25.294981 6 log.go:172] (0xc001db8640) (3) Data frame handling I0509 11:34:25.295019 6 log.go:172] (0xc000db1ce0) Data frame received for 5 I0509 11:34:25.295067 6 log.go:172] (0xc001884000) (5) Data frame handling I0509 11:34:25.295098 6 log.go:172] (0xc001db8640) (3) Data frame sent I0509 11:34:25.295117 6 log.go:172] (0xc000db1ce0) Data frame received for 3 I0509 11:34:25.295152 6 log.go:172] (0xc001db8640) (3) Data frame handling I0509 11:34:25.296141 6 log.go:172] (0xc000db1ce0) Data frame received for 1 I0509 11:34:25.296172 6 log.go:172] (0xc001db85a0) (1) Data frame handling I0509 11:34:25.296207 6 log.go:172] (0xc001db85a0) (1) Data frame sent I0509 11:34:25.296236 6 log.go:172] (0xc000db1ce0) (0xc001db85a0) Stream removed, broadcasting: 1 I0509 11:34:25.296265 6 log.go:172] (0xc000db1ce0) Go away received I0509 11:34:25.296378 6 log.go:172] (0xc000db1ce0) (0xc001db85a0) Stream removed, broadcasting: 1 I0509 11:34:25.296401 6 log.go:172] (0xc000db1ce0) (0xc001db8640) Stream removed, broadcasting: 3 I0509 11:34:25.296416 6 log.go:172] (0xc000db1ce0) (0xc001884000) Stream removed, broadcasting: 5 May 9 11:34:25.296: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 9 11:34:25.296: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:25.296: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:25.322166 6 log.go:172] (0xc0017202c0) (0xc0018843c0) Create stream I0509 11:34:25.322189 6 log.go:172] (0xc0017202c0) (0xc0018843c0) Stream added, broadcasting: 1 I0509 11:34:25.324352 6 log.go:172] (0xc0017202c0) Reply frame received for 1 I0509 11:34:25.324385 6 log.go:172] (0xc0017202c0) (0xc001884460) Create stream I0509 11:34:25.324397 6 log.go:172] (0xc0017202c0) (0xc001884460) Stream added, broadcasting: 3 I0509 11:34:25.325274 6 log.go:172] (0xc0017202c0) Reply frame received for 3 I0509 11:34:25.325299 6 log.go:172] (0xc0017202c0) (0xc001884500) Create stream I0509 11:34:25.325310 6 log.go:172] (0xc0017202c0) (0xc001884500) Stream added, broadcasting: 5 I0509 11:34:25.326194 6 log.go:172] (0xc0017202c0) Reply frame received for 5 I0509 
11:34:25.392512 6 log.go:172] (0xc0017202c0) Data frame received for 5 I0509 11:34:25.392556 6 log.go:172] (0xc001884500) (5) Data frame handling I0509 11:34:25.392587 6 log.go:172] (0xc0017202c0) Data frame received for 3 I0509 11:34:25.392606 6 log.go:172] (0xc001884460) (3) Data frame handling I0509 11:34:25.392618 6 log.go:172] (0xc001884460) (3) Data frame sent I0509 11:34:25.392627 6 log.go:172] (0xc0017202c0) Data frame received for 3 I0509 11:34:25.392635 6 log.go:172] (0xc001884460) (3) Data frame handling I0509 11:34:25.394079 6 log.go:172] (0xc0017202c0) Data frame received for 1 I0509 11:34:25.394100 6 log.go:172] (0xc0018843c0) (1) Data frame handling I0509 11:34:25.394124 6 log.go:172] (0xc0018843c0) (1) Data frame sent I0509 11:34:25.394150 6 log.go:172] (0xc0017202c0) (0xc0018843c0) Stream removed, broadcasting: 1 I0509 11:34:25.394211 6 log.go:172] (0xc0017202c0) Go away received I0509 11:34:25.394275 6 log.go:172] (0xc0017202c0) (0xc0018843c0) Stream removed, broadcasting: 1 I0509 11:34:25.394302 6 log.go:172] (0xc0017202c0) (0xc001884460) Stream removed, broadcasting: 3 I0509 11:34:25.394319 6 log.go:172] (0xc0017202c0) (0xc001884500) Stream removed, broadcasting: 5 May 9 11:34:25.394: INFO: Exec stderr: "" May 9 11:34:25.394: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:25.394: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:25.423311 6 log.go:172] (0xc001aa02c0) (0xc0025943c0) Create stream I0509 11:34:25.423344 6 log.go:172] (0xc001aa02c0) (0xc0025943c0) Stream added, broadcasting: 1 I0509 11:34:25.426634 6 log.go:172] (0xc001aa02c0) Reply frame received for 1 I0509 11:34:25.426694 6 log.go:172] (0xc001aa02c0) (0xc001db86e0) Create stream I0509 11:34:25.426723 6 log.go:172] (0xc001aa02c0) (0xc001db86e0) Stream added, broadcasting: 3 I0509 11:34:25.428709 6 log.go:172] (0xc001aa02c0) Reply frame received for 3 I0509 11:34:25.428760 6 log.go:172] (0xc001aa02c0) (0xc001db8780) Create stream I0509 11:34:25.428781 6 log.go:172] (0xc001aa02c0) (0xc001db8780) Stream added, broadcasting: 5 I0509 11:34:25.432259 6 log.go:172] (0xc001aa02c0) Reply frame received for 5 I0509 11:34:25.503310 6 log.go:172] (0xc001aa02c0) Data frame received for 5 I0509 11:34:25.503370 6 log.go:172] (0xc001db8780) (5) Data frame handling I0509 11:34:25.503412 6 log.go:172] (0xc001aa02c0) Data frame received for 3 I0509 11:34:25.503434 6 log.go:172] (0xc001db86e0) (3) Data frame handling I0509 11:34:25.503459 6 log.go:172] (0xc001db86e0) (3) Data frame sent I0509 11:34:25.503474 6 log.go:172] (0xc001aa02c0) Data frame received for 3 I0509 11:34:25.503485 6 log.go:172] (0xc001db86e0) (3) Data frame handling I0509 11:34:25.504859 6 log.go:172] (0xc001aa02c0) Data frame received for 1 I0509 11:34:25.504911 6 log.go:172] (0xc0025943c0) (1) Data frame handling I0509 11:34:25.504930 6 log.go:172] (0xc0025943c0) (1) Data frame sent I0509 11:34:25.504969 6 log.go:172] (0xc001aa02c0) (0xc0025943c0) Stream removed, broadcasting: 1 I0509 11:34:25.505039 6 log.go:172] (0xc001aa02c0) Go away received I0509 11:34:25.505544 6 log.go:172] (0xc001aa02c0) (0xc0025943c0) Stream removed, broadcasting: 1 I0509 11:34:25.505580 6 log.go:172] (0xc001aa02c0) (0xc001db86e0) Stream removed, broadcasting: 3 I0509 11:34:25.505596 6 log.go:172] (0xc001aa02c0) (0xc001db8780) Stream removed, broadcasting: 5 May 9 11:34:25.505: INFO: 
Exec stderr: "" May 9 11:34:25.505: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:25.505: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:25.535117 6 log.go:172] (0xc001720790) (0xc001884820) Create stream I0509 11:34:25.535151 6 log.go:172] (0xc001720790) (0xc001884820) Stream added, broadcasting: 1 I0509 11:34:25.536937 6 log.go:172] (0xc001720790) Reply frame received for 1 I0509 11:34:25.536984 6 log.go:172] (0xc001720790) (0xc001db8820) Create stream I0509 11:34:25.537001 6 log.go:172] (0xc001720790) (0xc001db8820) Stream added, broadcasting: 3 I0509 11:34:25.538256 6 log.go:172] (0xc001720790) Reply frame received for 3 I0509 11:34:25.538316 6 log.go:172] (0xc001720790) (0xc001db8960) Create stream I0509 11:34:25.538344 6 log.go:172] (0xc001720790) (0xc001db8960) Stream added, broadcasting: 5 I0509 11:34:25.539319 6 log.go:172] (0xc001720790) Reply frame received for 5 I0509 11:34:25.608634 6 log.go:172] (0xc001720790) Data frame received for 3 I0509 11:34:25.608710 6 log.go:172] (0xc001db8820) (3) Data frame handling I0509 11:34:25.608734 6 log.go:172] (0xc001db8820) (3) Data frame sent I0509 11:34:25.608749 6 log.go:172] (0xc001720790) Data frame received for 3 I0509 11:34:25.608770 6 log.go:172] (0xc001db8820) (3) Data frame handling I0509 11:34:25.608807 6 log.go:172] (0xc001720790) Data frame received for 5 I0509 11:34:25.608839 6 log.go:172] (0xc001db8960) (5) Data frame handling I0509 11:34:25.610442 6 log.go:172] (0xc001720790) Data frame received for 1 I0509 11:34:25.610468 6 log.go:172] (0xc001884820) (1) Data frame handling I0509 11:34:25.610493 6 log.go:172] (0xc001884820) (1) Data frame sent I0509 11:34:25.610523 6 log.go:172] (0xc001720790) (0xc001884820) Stream removed, broadcasting: 1 I0509 11:34:25.610553 6 log.go:172] (0xc001720790) Go away received I0509 11:34:25.610722 6 log.go:172] (0xc001720790) (0xc001884820) Stream removed, broadcasting: 1 I0509 11:34:25.610757 6 log.go:172] (0xc001720790) (0xc001db8820) Stream removed, broadcasting: 3 I0509 11:34:25.610784 6 log.go:172] (0xc001720790) (0xc001db8960) Stream removed, broadcasting: 5 May 9 11:34:25.610: INFO: Exec stderr: "" May 9 11:34:25.610: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rccqc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 11:34:25.610: INFO: >>> kubeConfig: /root/.kube/config I0509 11:34:25.646582 6 log.go:172] (0xc000b142c0) (0xc001db8be0) Create stream I0509 11:34:25.646606 6 log.go:172] (0xc000b142c0) (0xc001db8be0) Stream added, broadcasting: 1 I0509 11:34:25.648964 6 log.go:172] (0xc000b142c0) Reply frame received for 1 I0509 11:34:25.649005 6 log.go:172] (0xc000b142c0) (0xc0018848c0) Create stream I0509 11:34:25.649018 6 log.go:172] (0xc000b142c0) (0xc0018848c0) Stream added, broadcasting: 3 I0509 11:34:25.650304 6 log.go:172] (0xc000b142c0) Reply frame received for 3 I0509 11:34:25.650343 6 log.go:172] (0xc000b142c0) (0xc001884a00) Create stream I0509 11:34:25.650355 6 log.go:172] (0xc000b142c0) (0xc001884a00) Stream added, broadcasting: 5 I0509 11:34:25.651349 6 log.go:172] (0xc000b142c0) Reply frame received for 5 I0509 11:34:25.724412 6 log.go:172] (0xc000b142c0) Data frame received for 5 I0509 11:34:25.724446 6 log.go:172] (0xc001884a00) (5) Data frame 
handling I0509 11:34:25.724465 6 log.go:172] (0xc000b142c0) Data frame received for 3 I0509 11:34:25.724482 6 log.go:172] (0xc0018848c0) (3) Data frame handling I0509 11:34:25.724489 6 log.go:172] (0xc0018848c0) (3) Data frame sent I0509 11:34:25.724501 6 log.go:172] (0xc000b142c0) Data frame received for 3 I0509 11:34:25.724510 6 log.go:172] (0xc0018848c0) (3) Data frame handling I0509 11:34:25.725735 6 log.go:172] (0xc000b142c0) Data frame received for 1 I0509 11:34:25.725769 6 log.go:172] (0xc001db8be0) (1) Data frame handling I0509 11:34:25.725788 6 log.go:172] (0xc001db8be0) (1) Data frame sent I0509 11:34:25.725798 6 log.go:172] (0xc000b142c0) (0xc001db8be0) Stream removed, broadcasting: 1 I0509 11:34:25.725810 6 log.go:172] (0xc000b142c0) Go away received I0509 11:34:25.725896 6 log.go:172] (0xc000b142c0) (0xc001db8be0) Stream removed, broadcasting: 1 I0509 11:34:25.725911 6 log.go:172] (0xc000b142c0) (0xc0018848c0) Stream removed, broadcasting: 3 I0509 11:34:25.725922 6 log.go:172] (0xc000b142c0) (0xc001884a00) Stream removed, broadcasting: 5 May 9 11:34:25.725: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:34:25.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-rccqc" for this suite. May 9 11:35:15.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:35:15.804: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-rccqc, resource: bindings, ignored listing per whitelist May 9 11:35:15.830: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-rccqc deletion completed in 50.101514975s • [SLOW TEST:61.386 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:35:15.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-273cf6f7-91e9-11ea-a20c-0242ac110018 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:35:22.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qsvtg" for this suite. 
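Editor's note: the ConfigMap test above stores binary content alongside text and expects both to round-trip unchanged through a volume mount. A rough sketch of a ConfigMap carrying a binaryData key next to an ordinary data key — the key names and payload are made up for illustration.

```go
// configmap_binary.go — hypothetical sketch of a ConfigMap with binary data.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		// data holds UTF-8 text; binaryData holds arbitrary bytes and is
		// base64-encoded when the object is serialized to JSON or YAML.
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe, 0x00, 0xff}},
	}

	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}
```

Mounted as a volume, each key becomes a file, and the test simply reads both files back from the pod to confirm the bytes were not mangled.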
May 9 11:35:44.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:35:44.126: INFO: namespace: e2e-tests-configmap-qsvtg, resource: bindings, ignored listing per whitelist May 9 11:35:44.129: INFO: namespace e2e-tests-configmap-qsvtg deletion completed in 22.092551209s • [SLOW TEST:28.298 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:35:44.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-4wzk STEP: Creating a pod to test atomic-volume-subpath May 9 11:35:44.313: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4wzk" in namespace "e2e-tests-subpath-fhxm7" to be "success or failure" May 9 11:35:44.323: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Pending", Reason="", readiness=false. Elapsed: 9.573038ms May 9 11:35:46.326: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013333406s May 9 11:35:48.331: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017693794s May 9 11:35:50.335: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021895014s May 9 11:35:52.339: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 8.026278214s May 9 11:35:54.344: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 10.030650351s May 9 11:35:56.347: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 12.034459779s May 9 11:35:58.351: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 14.038329626s May 9 11:36:00.355: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 16.042160518s May 9 11:36:02.360: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 18.046643371s May 9 11:36:04.364: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 20.050674503s May 9 11:36:06.367: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 22.054337597s May 9 11:36:08.372: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.059073288s May 9 11:36:10.375: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Running", Reason="", readiness=false. Elapsed: 26.062375827s May 9 11:36:12.380: INFO: Pod "pod-subpath-test-secret-4wzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.066586641s STEP: Saw pod success May 9 11:36:12.380: INFO: Pod "pod-subpath-test-secret-4wzk" satisfied condition "success or failure" May 9 11:36:12.383: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-4wzk container test-container-subpath-secret-4wzk: STEP: delete the pod May 9 11:36:12.557: INFO: Waiting for pod pod-subpath-test-secret-4wzk to disappear May 9 11:36:12.634: INFO: Pod pod-subpath-test-secret-4wzk no longer exists STEP: Deleting pod pod-subpath-test-secret-4wzk May 9 11:36:12.634: INFO: Deleting pod "pod-subpath-test-secret-4wzk" in namespace "e2e-tests-subpath-fhxm7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:36:12.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-fhxm7" for this suite. May 9 11:36:18.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:36:18.745: INFO: namespace: e2e-tests-subpath-fhxm7, resource: bindings, ignored listing per whitelist May 9 11:36:18.787: INFO: namespace e2e-tests-subpath-fhxm7 deletion completed in 6.147140448s • [SLOW TEST:34.659 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:36:18.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:36:25.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-2plwg" for this suite. 
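Editor's note: the ReplicationController test above first creates a bare pod labelled name=pod-adoption and then creates an RC whose selector matches that label, so the controller adopts the orphan instead of creating a second replica. A sketch of the matching pair follows; the image and exact field values are illustrative.

```go
// rc_adoption.go — hypothetical sketch of an orphan pod plus an RC that adopts it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	replicas := int32(1)

	// The orphan pod is created first, carrying the label the RC will select on.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}

	// Because the selector already matches a live pod, the controller takes
	// ownership of it (sets an ownerReference) rather than starting a new replica.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}

	for _, obj := range []interface{}{orphan, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```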
May 9 11:36:49.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:36:49.976: INFO: namespace: e2e-tests-replication-controller-2plwg, resource: bindings, ignored listing per whitelist May 9 11:36:50.019: INFO: namespace e2e-tests-replication-controller-2plwg deletion completed in 24.10726242s • [SLOW TEST:31.232 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:36:50.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 9 11:36:54.718: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5f5cdba0-91e9-11ea-a20c-0242ac110018" May 9 11:36:54.718: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5f5cdba0-91e9-11ea-a20c-0242ac110018" in namespace "e2e-tests-pods-hh2dn" to be "terminated due to deadline exceeded" May 9 11:36:54.787: INFO: Pod "pod-update-activedeadlineseconds-5f5cdba0-91e9-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 69.712641ms May 9 11:36:56.871: INFO: Pod "pod-update-activedeadlineseconds-5f5cdba0-91e9-11ea-a20c-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.153762691s May 9 11:36:56.872: INFO: Pod "pod-update-activedeadlineseconds-5f5cdba0-91e9-11ea-a20c-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:36:56.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hh2dn" for this suite. 
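Editor's note: the Pods test above lowers a running pod's spec.activeDeadlineSeconds and then waits for the kubelet to fail the pod with reason DeadlineExceeded, which is exactly the Phase="Failed", Reason="DeadlineExceeded" transition in the log. activeDeadlineSeconds is one of the few pod-spec fields that may be mutated after creation; a minimal sketch of setting it (the one-second value simply mirrors the quick failure seen above).

```go
// active_deadline.go — hypothetical sketch of updating spec.activeDeadlineSeconds.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "main", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}

	// Setting (or shortening) this field on a live pod starts the countdown; once it
	// elapses the kubelet marks the pod Failed with reason "DeadlineExceeded".
	deadline := int64(1)
	pod.Spec.ActiveDeadlineSeconds = &deadline

	fmt.Printf("activeDeadlineSeconds=%d\n", *pod.Spec.ActiveDeadlineSeconds)
}
```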
May 9 11:37:03.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:37:03.049: INFO: namespace: e2e-tests-pods-hh2dn, resource: bindings, ignored listing per whitelist May 9 11:37:03.085: INFO: namespace e2e-tests-pods-hh2dn deletion completed in 6.20973331s • [SLOW TEST:13.066 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:37:03.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 9 11:37:03.202: INFO: Waiting up to 5m0s for pod "pod-6722b6e8-91e9-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-k7z7p" to be "success or failure" May 9 11:37:03.234: INFO: Pod "pod-6722b6e8-91e9-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.008141ms May 9 11:37:05.238: INFO: Pod "pod-6722b6e8-91e9-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0357766s May 9 11:37:07.241: INFO: Pod "pod-6722b6e8-91e9-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.039135835s May 9 11:37:09.245: INFO: Pod "pod-6722b6e8-91e9-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043034635s STEP: Saw pod success May 9 11:37:09.245: INFO: Pod "pod-6722b6e8-91e9-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:37:09.248: INFO: Trying to get logs from node hunter-worker pod pod-6722b6e8-91e9-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 11:37:09.322: INFO: Waiting for pod pod-6722b6e8-91e9-11ea-a20c-0242ac110018 to disappear May 9 11:37:09.348: INFO: Pod pod-6722b6e8-91e9-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:37:09.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-k7z7p" for this suite. 
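Editor's note: the EmptyDir tests in this run mount a memory-backed (tmpfs) emptyDir and have a test container create a file with the requested mode (0666 in the test above) so the mode can be verified. Below is a sketch of the volume wiring under those assumptions; the real test uses its mounttest image and flags, so the busybox command here only approximates the check.

```go
// emptydir_tmpfs.go — hypothetical sketch of a tmpfs-backed emptyDir pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder; the conformance test uses its own mounttest image
				// Create a file with mode 0666 and print it, roughly what the test asserts on.
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```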
May 9 11:37:15.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:37:15.427: INFO: namespace: e2e-tests-emptydir-k7z7p, resource: bindings, ignored listing per whitelist May 9 11:37:15.464: INFO: namespace e2e-tests-emptydir-k7z7p deletion completed in 6.11256178s • [SLOW TEST:12.379 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:37:15.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 9 11:37:20.150: INFO: Successfully updated pod "annotationupdate6e88f580-91e9-11ea-a20c-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:37:22.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fzwlx" for this suite. 
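Editor's note: the projected downwardAPI test above mounts the pod's own annotations into a file and then updates an annotation ("Successfully updated pod ..."), expecting the kubelet to refresh the file's contents. A sketch of the projected volume that exposes metadata.annotations follows; names, image, and command are illustrative.

```go
// projected_annotations.go — hypothetical sketch of a projected downwardAPI volume.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The kubelet rewrites this file when the pod's annotations change.
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"}, // updated later, as the test does
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```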
May 9 11:37:44.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:37:44.219: INFO: namespace: e2e-tests-projected-fzwlx, resource: bindings, ignored listing per whitelist May 9 11:37:44.282: INFO: namespace e2e-tests-projected-fzwlx deletion completed in 22.098413227s • [SLOW TEST:28.817 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:37:44.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:37:44.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-87mx7" to be "success or failure" May 9 11:37:44.391: INFO: Pod "downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.305639ms May 9 11:37:46.429: INFO: Pod "downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041340951s May 9 11:37:48.615: INFO: Pod "downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.227825441s May 9 11:37:50.619: INFO: Pod "downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.23180981s STEP: Saw pod success May 9 11:37:50.619: INFO: Pod "downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:37:50.622: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:37:50.694: INFO: Waiting for pod downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018 to disappear May 9 11:37:50.708: INFO: Pod downwardapi-volume-7fae6471-91e9-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:37:50.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-87mx7" for this suite. 
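For reference, "set mode on item file" hinges on the per-item Mode field of a downwardAPI volume, which overrides the volume-wide defaultMode for that one file. The sketch below shows the shape of such a spec; the 0400 mode, image and command are illustrative, not the test's exact values.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item file mode that the test then verifies inside the container
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode, // overrides the volume's defaultMode for this item only
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}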
May 9 11:37:56.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:37:56.780: INFO: namespace: e2e-tests-downward-api-87mx7, resource: bindings, ignored listing per whitelist May 9 11:37:56.797: INFO: namespace e2e-tests-downward-api-87mx7 deletion completed in 6.086050794s • [SLOW TEST:12.515 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:37:56.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-cnlpr May 9 11:38:00.958: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-cnlpr STEP: checking the pod's current state and verifying that restartCount is present May 9 11:38:00.961: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:42:01.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-cnlpr" for this suite. 
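This probe case expects restartCount to stay at 0 over the roughly four-minute observation window because the /healthz endpoint keeps returning success. A minimal sketch of an HTTP liveness probe is below, assuming a recent k8s.io/api (older releases embed the handler under Handler rather than ProbeHandler); the image and timing values are illustrative.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A liveness probe that keeps succeeding, so restartCount should stay at 0.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative image that serves /healthz
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    3,
				},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}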
May 9 11:42:07.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:42:07.780: INFO: namespace: e2e-tests-container-probe-cnlpr, resource: bindings, ignored listing per whitelist May 9 11:42:07.879: INFO: namespace e2e-tests-container-probe-cnlpr deletion completed in 6.241049886s • [SLOW TEST:251.081 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:42:07.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:42:14.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-rl4qr" for this suite. 
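The hostAliases case relies on the kubelet merging spec.hostAliases into the pod's /etc/hosts. A short sketch follows; the IP, hostnames and busybox command are illustrative assumptions.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// hostAliases entries are appended by the kubelet to the pod's /etc/hosts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			HostAliases: []corev1.HostAlias{{
				IP:        "127.0.0.1",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts"},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}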
May 9 11:42:54.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:42:54.093: INFO: namespace: e2e-tests-kubelet-test-rl4qr, resource: bindings, ignored listing per whitelist May 9 11:42:54.153: INFO: namespace e2e-tests-kubelet-test-rl4qr deletion completed in 40.088137359s • [SLOW TEST:46.274 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:42:54.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-tpncp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpncp to expose endpoints map[] May 9 11:42:54.340: INFO: Get endpoints failed (12.688846ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 9 11:42:55.345: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpncp exposes endpoints map[] (1.017697749s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-tpncp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpncp to expose endpoints map[pod1:[100]] May 9 11:42:59.406: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.055138136s elapsed, will retry) May 9 11:43:00.412: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpncp exposes endpoints map[pod1:[100]] (5.061367242s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-tpncp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpncp to expose endpoints map[pod1:[100] pod2:[101]] May 9 11:43:04.504: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpncp exposes endpoints map[pod1:[100] pod2:[101]] (4.087627348s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-tpncp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpncp to expose endpoints map[pod2:[101]] May 9 11:43:05.526: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpncp exposes endpoints map[pod2:[101]] (1.017713979s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-tpncp STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace e2e-tests-services-tpncp to expose endpoints map[] May 9 11:43:06.584: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpncp exposes endpoints map[] (1.053871418s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:43:06.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-tpncp" for this suite. May 9 11:43:12.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:43:12.855: INFO: namespace: e2e-tests-services-tpncp, resource: bindings, ignored listing per whitelist May 9 11:43:12.918: INFO: namespace e2e-tests-services-tpncp deletion completed in 6.138664348s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:18.765 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:43:12.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-439a3d35-91ea-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:43:13.090: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-bxct5" to be "success or failure" May 9 11:43:13.094: INFO: Pod "pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648961ms May 9 11:43:15.345: INFO: Pod "pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255374124s May 9 11:43:17.348: INFO: Pod "pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.258768587s STEP: Saw pod success May 9 11:43:17.348: INFO: Pod "pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:43:17.351: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 9 11:43:17.436: INFO: Waiting for pod pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018 to disappear May 9 11:43:17.447: INFO: Pod pod-projected-configmaps-439afde6-91ea-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:43:17.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bxct5" for this suite. May 9 11:43:23.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:43:23.752: INFO: namespace: e2e-tests-projected-bxct5, resource: bindings, ignored listing per whitelist May 9 11:43:23.778: INFO: namespace e2e-tests-projected-bxct5 deletion completed in 6.327326678s • [SLOW TEST:10.860 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:43:23.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:43:27.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-ncbz7" for this suite. 
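The "print the output to logs" case only needs a run-to-completion container whose stdout is then fetched through the pods/log subresource. A minimal sketch, with an assumed echo command:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Whatever the command writes to stdout/stderr must be retrievable via the
	// pod's logs (kubectl logs / the pods/log subresource).
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo 'Hello World' && sleep 10"},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}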
May 9 11:44:13.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:44:13.992: INFO: namespace: e2e-tests-kubelet-test-ncbz7, resource: bindings, ignored listing per whitelist May 9 11:44:14.063: INFO: namespace e2e-tests-kubelet-test-ncbz7 deletion completed in 46.092471169s • [SLOW TEST:50.285 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:44:14.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 9 11:44:18.223: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:44:42.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-n8xhn" for this suite. May 9 11:44:48.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:44:48.530: INFO: namespace: e2e-tests-namespaces-n8xhn, resource: bindings, ignored listing per whitelist May 9 11:44:48.725: INFO: namespace e2e-tests-namespaces-n8xhn deletion completed in 6.224466677s STEP: Destroying namespace "e2e-tests-nsdeletetest-vvnvl" for this suite. May 9 11:44:48.727: INFO: Namespace e2e-tests-nsdeletetest-vvnvl was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-bpwrz" for this suite. 
May 9 11:44:54.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:44:54.806: INFO: namespace: e2e-tests-nsdeletetest-bpwrz, resource: bindings, ignored listing per whitelist May 9 11:44:54.809: INFO: namespace e2e-tests-nsdeletetest-bpwrz deletion completed in 6.08204705s • [SLOW TEST:40.745 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:44:54.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-804d728a-91ea-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:44:54.921: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-hjnbd" to be "success or failure" May 9 11:44:54.934: INFO: Pod "pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.843478ms May 9 11:44:56.986: INFO: Pod "pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065829381s May 9 11:44:59.100: INFO: Pod "pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179749184s STEP: Saw pod success May 9 11:44:59.100: INFO: Pod "pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:44:59.103: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 9 11:44:59.156: INFO: Waiting for pod pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018 to disappear May 9 11:44:59.262: INFO: Pod pod-projected-configmaps-804dc4d0-91ea-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:44:59.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hjnbd" for this suite. 
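The non-root variant adds a pod-level securityContext so the ConfigMap content is read by an unprivileged UID. Sketch below; the UID 1000, ConfigMap name and key path are assumptions, not the generated names seen in the log.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-zero UID; the case only cares that the reader is not root
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}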
May 9 11:45:05.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:45:05.351: INFO: namespace: e2e-tests-projected-hjnbd, resource: bindings, ignored listing per whitelist May 9 11:45:05.366: INFO: namespace e2e-tests-projected-hjnbd deletion completed in 6.100630652s • [SLOW TEST:10.556 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:45:05.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 9 11:45:05.420: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 9 11:45:05.440: INFO: Waiting for terminating namespaces to be deleted... May 9 11:45:05.442: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 9 11:45:05.447: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 9 11:45:05.447: INFO: Container coredns ready: true, restart count 0 May 9 11:45:05.447: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 9 11:45:05.447: INFO: Container kube-proxy ready: true, restart count 0 May 9 11:45:05.447: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 9 11:45:05.447: INFO: Container kindnet-cni ready: true, restart count 0 May 9 11:45:05.447: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 9 11:45:05.452: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 9 11:45:05.452: INFO: Container kindnet-cni ready: true, restart count 0 May 9 11:45:05.452: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 9 11:45:05.452: INFO: Container coredns ready: true, restart count 0 May 9 11:45:05.452: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 9 11:45:05.452: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160d598cf495a828], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
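The FailedScheduling event above is exactly what a pod with an unsatisfiable nodeSelector produces. A minimal sketch of such a pod, with an assumed label pair and placeholder image:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A nodeSelector that no node's labels satisfy, so the scheduler records
	// FailedScheduling ("0/3 nodes are available: 3 node(s) didn't match node selector.").
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod-demo"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // assumed; matches nothing in this cluster
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative placeholder image
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}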
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:45:06.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-xwhf8" for this suite. May 9 11:45:12.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:45:12.537: INFO: namespace: e2e-tests-sched-pred-xwhf8, resource: bindings, ignored listing per whitelist May 9 11:45:12.565: INFO: namespace e2e-tests-sched-pred-xwhf8 deletion completed in 6.092909164s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.199 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:45:12.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 9 11:45:20.754: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:20.757: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:22.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:22.762: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:24.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:24.760: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:26.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:26.762: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:28.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:28.761: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:30.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:30.778: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:32.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:32.771: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:34.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:34.874: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:36.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:36.760: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:38.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:38.792: INFO: Pod pod-with-prestop-exec-hook still exists May 9 11:45:40.757: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 9 11:45:40.763: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:45:40.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-2l9zg" for this suite. 
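The prestop case attaches an exec handler that runs before the container is stopped; the long "still exists" loop above is the test waiting out the hook plus the termination grace period. A hedged sketch, assuming a recent k8s.io/api (older releases use Handler instead of LifecycleHandler); the callback URL is purely illustrative.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// On deletion, the kubelet runs the preStop handler before stopping the
	// container; the conformance case has the hook call back to a helper pod
	// (the "container to handle the HTTPGet hook request" above) to prove it ran.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Illustrative callback target, not the test's real helper address.
							Command: []string{"sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}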
May 9 11:46:04.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:46:04.838: INFO: namespace: e2e-tests-container-lifecycle-hook-2l9zg, resource: bindings, ignored listing per whitelist May 9 11:46:04.885: INFO: namespace e2e-tests-container-lifecycle-hook-2l9zg deletion completed in 24.113086058s • [SLOW TEST:52.320 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:46:04.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:46:06.105: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-bz9s7" to be "success or failure" May 9 11:46:06.755: INFO: Pod "downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 649.837992ms May 9 11:46:08.827: INFO: Pod "downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722252404s May 9 11:46:11.114: INFO: Pod "downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.008899945s May 9 11:46:13.240: INFO: Pod "downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 7.134722582s May 9 11:46:15.244: INFO: Pod "downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.139001791s STEP: Saw pod success May 9 11:46:15.244: INFO: Pod "downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:46:15.248: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:46:15.487: INFO: Waiting for pod downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018 to disappear May 9 11:46:15.533: INFO: Pod downwardapi-volume-aa9b9d43-91ea-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:46:15.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bz9s7" for this suite. May 9 11:46:21.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:46:21.678: INFO: namespace: e2e-tests-projected-bz9s7, resource: bindings, ignored listing per whitelist May 9 11:46:21.727: INFO: namespace e2e-tests-projected-bz9s7 deletion completed in 6.189984388s • [SLOW TEST:16.842 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:46:21.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-b4582648-91ea-11ea-a20c-0242ac110018 STEP: Creating secret with name s-test-opt-upd-b45826da-91ea-11ea-a20c-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b4582648-91ea-11ea-a20c-0242ac110018 STEP: Updating secret s-test-opt-upd-b45826da-91ea-11ea-a20c-0242ac110018 STEP: Creating secret with name s-test-opt-create-b458270b-91ea-11ea-a20c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:47:40.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nfqmx" for this suite. 
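Both secrets in this case are mounted with optional set to true, which is what lets the pod start even though one secret is deleted and another does not exist yet; the kubelet then syncs the volume contents as the secrets change. A rough sketch of the volume wiring, with assumed names:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{
					Name: "creds-del", // secret that gets deleted after the pod starts
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-del", Optional: &optional},
					},
				},
				{
					Name: "creds-create", // secret that does not exist yet when the pod starts
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-create", Optional: &optional},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do ls /etc/creds-del /etc/creds-create; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "creds-del", MountPath: "/etc/creds-del"},
					{Name: "creds-create", MountPath: "/etc/creds-create"},
				},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}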
May 9 11:48:05.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:48:05.564: INFO: namespace: e2e-tests-secrets-nfqmx, resource: bindings, ignored listing per whitelist May 9 11:48:05.588: INFO: namespace e2e-tests-secrets-nfqmx deletion completed in 24.68784812s • [SLOW TEST:103.861 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:48:05.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-r84df/configmap-test-f251f96a-91ea-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:48:06.260: INFO: Waiting up to 5m0s for pod "pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-r84df" to be "success or failure" May 9 11:48:06.343: INFO: Pod "pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 82.515524ms May 9 11:48:08.503: INFO: Pod "pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24309538s May 9 11:48:10.508: INFO: Pod "pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.247308782s STEP: Saw pod success May 9 11:48:10.508: INFO: Pod "pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:48:10.511: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018 container env-test: STEP: delete the pod May 9 11:48:10.536: INFO: Waiting for pod pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018 to disappear May 9 11:48:10.541: INFO: Pod pod-configmaps-f254846b-91ea-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:48:10.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-r84df" for this suite. 
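The environment-consumption case maps a single ConfigMap key into an env var via valueFrom.configMapKeyRef. Sketch below; the ConfigMap name, key and variable name are placeholders rather than the generated names in the log.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}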
May 9 11:48:20.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:48:20.772: INFO: namespace: e2e-tests-configmap-r84df, resource: bindings, ignored listing per whitelist May 9 11:48:20.817: INFO: namespace e2e-tests-configmap-r84df deletion completed in 10.273342336s • [SLOW TEST:15.229 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:48:20.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:48:45.076: INFO: Container started at 2020-05-09 11:48:23 +0000 UTC, pod became ready at 2020-05-09 11:48:43 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:48:45.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-7dcpt" for this suite. 
May 9 11:49:09.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:49:09.109: INFO: namespace: e2e-tests-container-probe-7dcpt, resource: bindings, ignored listing per whitelist May 9 11:49:09.190: INFO: namespace e2e-tests-container-probe-7dcpt deletion completed in 24.110038451s • [SLOW TEST:48.372 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:49:09.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 9 11:49:16.383: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:49:17.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-9jz6f" for this suite. 
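Adoption works because the ReplicaSet's selector matches the label on the pre-existing pod-adoption-release pod; relabeling the pod later makes the controller release it again. A sketch of such a ReplicaSet, reusing the nginx:1.14-alpine image seen elsewhere in this run (other values assumed):

package main

import (
	"encoding/json"
	"os"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"}
	// A ReplicaSet whose selector matches the pre-existing pod's label; the
	// controller adopts that pod, then releases it once its label changes.
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
				},
			},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(rs)
}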
May 9 11:49:39.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:49:39.663: INFO: namespace: e2e-tests-replicaset-9jz6f, resource: bindings, ignored listing per whitelist May 9 11:49:39.683: INFO: namespace e2e-tests-replicaset-9jz6f deletion completed in 22.085333351s • [SLOW TEST:30.493 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:49:39.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:49:39.820: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-5v7cd" to be "success or failure" May 9 11:49:39.840: INFO: Pod "downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.706751ms May 9 11:49:41.897: INFO: Pod "downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077411211s May 9 11:49:43.913: INFO: Pod "downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093243063s STEP: Saw pod success May 9 11:49:43.913: INFO: Pod "downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:49:43.915: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:49:43.938: INFO: Waiting for pod downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018 to disappear May 9 11:49:43.978: INFO: Pod downwardapi-volume-2a1cd402-91eb-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:49:43.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5v7cd" for this suite. 
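When the container declares no memory limit, the projected limits.memory item falls back to node allocatable, which is what this case asserts. A sketch of the resourceFieldRef wiring follows; the divisor, names and image are illustrative.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No memory limit is set on the container, so the projected limits.memory
	// value falls back to the node's allocatable memory.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-memory-limit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
										Divisor:       resource.MustParse("1"),
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}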
May 9 11:49:49.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:49:50.066: INFO: namespace: e2e-tests-projected-5v7cd, resource: bindings, ignored listing per whitelist May 9 11:49:50.075: INFO: namespace e2e-tests-projected-5v7cd deletion completed in 6.094220676s • [SLOW TEST:10.391 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:49:50.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 9 11:49:50.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kmcjp' May 9 11:49:52.659: INFO: stderr: "" May 9 11:49:52.659: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 9 11:49:52.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kmcjp' May 9 11:50:01.267: INFO: stderr: "" May 9 11:50:01.267: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:50:01.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kmcjp" for this suite. 
May 9 11:50:07.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:50:07.456: INFO: namespace: e2e-tests-kubectl-kmcjp, resource: bindings, ignored listing per whitelist May 9 11:50:07.491: INFO: namespace e2e-tests-kubectl-kmcjp deletion completed in 6.215338306s • [SLOW TEST:17.415 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:50:07.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3aa9b343-91eb-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:50:07.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-grnhp" to be "success or failure" May 9 11:50:07.609: INFO: Pod "pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125517ms May 9 11:50:09.613: INFO: Pod "pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008257426s May 9 11:50:11.617: INFO: Pod "pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.012552275s May 9 11:50:13.622: INFO: Pod "pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017254241s STEP: Saw pod success May 9 11:50:13.622: INFO: Pod "pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:50:13.625: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 9 11:50:13.647: INFO: Waiting for pod pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018 to disappear May 9 11:50:13.651: INFO: Pod pod-configmaps-3aabdb23-91eb-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:50:13.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-grnhp" for this suite. 
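The mappings variant uses items[] to remap a ConfigMap key onto a chosen relative path inside the mount. A short sketch with assumed names:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// items[] remaps a specific ConfigMap key to a chosen path inside the mount.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mappings-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}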
May 9 11:50:19.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:50:19.693: INFO: namespace: e2e-tests-configmap-grnhp, resource: bindings, ignored listing per whitelist May 9 11:50:19.739: INFO: namespace e2e-tests-configmap-grnhp deletion completed in 6.084365796s • [SLOW TEST:12.248 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:50:19.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:50:19.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-ltf5w" for this suite. 
May 9 11:50:25.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:50:25.943: INFO: namespace: e2e-tests-services-ltf5w, resource: bindings, ignored listing per whitelist May 9 11:50:25.950: INFO: namespace e2e-tests-services-ltf5w deletion completed in 6.093812701s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.211 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:50:25.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-dnxdc May 9 11:50:30.085: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-dnxdc STEP: checking the pod's current state and verifying that restartCount is present May 9 11:50:30.089: INFO: Initial restart count of pod liveness-exec is 0 May 9 11:51:18.327: INFO: Restart count of pod e2e-tests-container-probe-dnxdc/liveness-exec is now 1 (48.238461727s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:51:18.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-dnxdc" for this suite. 
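Here the probed file is removed after a few seconds, so the exec probe fails and the kubelet restarts the container once, matching the restart count recorded above. A hedged sketch, again assuming a recent k8s.io/api (older releases embed the handler under Handler); the command and timings are illustrative.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container creates /tmp/health and removes it after 10s, so the exec
	// probe starts failing and the kubelet restarts the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}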
May 9 11:51:24.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:51:24.448: INFO: namespace: e2e-tests-container-probe-dnxdc, resource: bindings, ignored listing per whitelist May 9 11:51:24.478: INFO: namespace e2e-tests-container-probe-dnxdc deletion completed in 6.106777794s • [SLOW TEST:58.528 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:51:24.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:51:24.594: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:51:28.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-f9hhf" for this suite. 
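The pods test just above fetches container logs over a websocket connection. The sketch below retrieves logs through the ordinary client-go GetLogs request instead, not the websocket transport the test exercises; namespace and pod name are placeholders and the signatures follow current client-go.

package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder namespace/pod; the e2e run used a generated e2e-tests-pods-* namespace.
	req := cs.CoreV1().Pods("default").GetLogs("my-pod", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()
	body, err := io.ReadAll(stream)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("container logs:\n%s", body)
}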
May 9 11:52:06.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:52:06.696: INFO: namespace: e2e-tests-pods-f9hhf, resource: bindings, ignored listing per whitelist May 9 11:52:06.765: INFO: namespace e2e-tests-pods-f9hhf deletion completed in 38.10427239s • [SLOW TEST:42.287 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:52:06.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 9 11:52:06.899: INFO: PodSpec: initContainers in spec.initContainers May 9 11:52:54.576: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-81c8e5bf-91eb-11ea-a20c-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-cbljb", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-cbljb/pods/pod-init-81c8e5bf-91eb-11ea-a20c-0242ac110018", UID:"81cc8e5a-91eb-11ea-99e8-0242ac110002", ResourceVersion:"9585395", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724621926, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"899050546"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-t4nz4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002045d80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4nz4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4nz4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4nz4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001325eb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0017be4e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001325f40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001325f60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001325f68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001325f6c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724621927, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724621927, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724621927, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724621926, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.129", StartTime:(*v1.Time)(0xc0011fe7c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0011fe800), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001896c40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dd8da8858ba67b8135ce9cac98ead822d455d9ea95ce98f8a1d09d7f862bdb9d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011fe820), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011fe7e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:52:54.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-cbljb" for this suite. May 9 11:53:16.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:53:16.665: INFO: namespace: e2e-tests-init-container-cbljb, resource: bindings, ignored listing per whitelist May 9 11:53:16.682: INFO: namespace e2e-tests-init-container-cbljb deletion completed in 22.099246154s • [SLOW TEST:69.917 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:53:16.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:53:16.822: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 9 11:53:16.827: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rfs68/daemonsets","resourceVersion":"9585459"},"items":null} May 9 11:53:16.830: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rfs68/pods","resourceVersion":"9585459"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:53:16.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
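The pod dumped above in the init-container test has two init containers (init1 running /bin/false, init2 running /bin/true) and one app container (run1, the pause image). Because init1 keeps failing, init2 and run1 never start even though the restart policy is Always, which is exactly the state the dump shows (init1 RestartCount 3, run1 still waiting). A sketch of that spec trimmed to the fields that matter; names and images follow the dump, resource requests and the service-account volume are omitted.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 always exits non-zero, so the kubelet keeps restarting it
				// and never moves on to init2 or the app container.
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}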
STEP: Destroying namespace "e2e-tests-daemonsets-rfs68" for this suite. May 9 11:53:22.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:53:22.888: INFO: namespace: e2e-tests-daemonsets-rfs68, resource: bindings, ignored listing per whitelist May 9 11:53:22.929: INFO: namespace e2e-tests-daemonsets-rfs68 deletion completed in 6.088347358s S [SKIPPING] [6.246 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 11:53:16.822: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:53:22.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 9 11:53:27.084: INFO: Pod pod-hostip-af28a0a3-91eb-11ea-a20c-0242ac110018 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:53:27.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-klsnx" for this suite. 
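The host-IP test above reports hostIP 172.17.0.4 for its pod once it is scheduled. A small sketch of reading that same status field with client-go; namespace and pod name are placeholders, and the call signatures follow current client-go.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder names; the e2e run used a generated e2e-tests-pods-* namespace.
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// status.hostIP is only populated after the pod has been bound to a node.
	fmt.Println("hostIP:", pod.Status.HostIP)
}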
May 9 11:53:49.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:53:49.156: INFO: namespace: e2e-tests-pods-klsnx, resource: bindings, ignored listing per whitelist May 9 11:53:49.180: INFO: namespace e2e-tests-pods-klsnx deletion completed in 22.090764587s • [SLOW TEST:26.251 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:53:49.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 9 11:53:49.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d8nfv' May 9 11:53:49.957: INFO: stderr: "" May 9 11:53:49.957: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 9 11:53:49.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d8nfv' May 9 11:53:50.107: INFO: stderr: "" May 9 11:53:50.107: INFO: stdout: "update-demo-nautilus-fh69v update-demo-nautilus-mnfj8 " May 9 11:53:50.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fh69v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:53:50.228: INFO: stderr: "" May 9 11:53:50.228: INFO: stdout: "" May 9 11:53:50.228: INFO: update-demo-nautilus-fh69v is created but not running May 9 11:53:55.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d8nfv' May 9 11:53:55.335: INFO: stderr: "" May 9 11:53:55.335: INFO: stdout: "update-demo-nautilus-fh69v update-demo-nautilus-mnfj8 " May 9 11:53:55.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fh69v -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:53:55.435: INFO: stderr: "" May 9 11:53:55.435: INFO: stdout: "" May 9 11:53:55.435: INFO: update-demo-nautilus-fh69v is created but not running May 9 11:54:00.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:00.549: INFO: stderr: "" May 9 11:54:00.549: INFO: stdout: "update-demo-nautilus-fh69v update-demo-nautilus-mnfj8 " May 9 11:54:00.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fh69v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:00.654: INFO: stderr: "" May 9 11:54:00.654: INFO: stdout: "true" May 9 11:54:00.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fh69v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:00.761: INFO: stderr: "" May 9 11:54:00.761: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:54:00.761: INFO: validating pod update-demo-nautilus-fh69v May 9 11:54:00.765: INFO: got data: { "image": "nautilus.jpg" } May 9 11:54:00.765: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 9 11:54:00.765: INFO: update-demo-nautilus-fh69v is verified up and running May 9 11:54:00.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnfj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:00.865: INFO: stderr: "" May 9 11:54:00.865: INFO: stdout: "true" May 9 11:54:00.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnfj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:00.969: INFO: stderr: "" May 9 11:54:00.969: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 9 11:54:00.969: INFO: validating pod update-demo-nautilus-mnfj8 May 9 11:54:00.973: INFO: got data: { "image": "nautilus.jpg" } May 9 11:54:00.973: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 9 11:54:00.973: INFO: update-demo-nautilus-mnfj8 is verified up and running STEP: rolling-update to new replication controller May 9 11:54:00.975: INFO: scanned /root for discovery docs: May 9 11:54:00.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:25.706: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 9 11:54:25.706: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 9 11:54:25.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:25.813: INFO: stderr: "" May 9 11:54:25.813: INFO: stdout: "update-demo-kitten-qvzls update-demo-kitten-vvzbr " May 9 11:54:25.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qvzls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:25.918: INFO: stderr: "" May 9 11:54:25.918: INFO: stdout: "true" May 9 11:54:25.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qvzls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:26.024: INFO: stderr: "" May 9 11:54:26.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 9 11:54:26.024: INFO: validating pod update-demo-kitten-qvzls May 9 11:54:26.034: INFO: got data: { "image": "kitten.jpg" } May 9 11:54:26.034: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 9 11:54:26.034: INFO: update-demo-kitten-qvzls is verified up and running May 9 11:54:26.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vvzbr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:26.124: INFO: stderr: "" May 9 11:54:26.124: INFO: stdout: "true" May 9 11:54:26.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vvzbr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d8nfv' May 9 11:54:26.224: INFO: stderr: "" May 9 11:54:26.224: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 9 11:54:26.224: INFO: validating pod update-demo-kitten-vvzbr May 9 11:54:26.228: INFO: got data: { "image": "kitten.jpg" } May 9 11:54:26.228: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 9 11:54:26.228: INFO: update-demo-kitten-vvzbr is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:54:26.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d8nfv" for this suite. May 9 11:54:50.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:54:50.268: INFO: namespace: e2e-tests-kubectl-d8nfv, resource: bindings, ignored listing per whitelist May 9 11:54:50.332: INFO: namespace e2e-tests-kubectl-d8nfv deletion completed in 24.100681443s • [SLOW TEST:61.152 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:54:50.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 9 11:54:50.402: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:54:58.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-v88kh" for this suite. 
May 9 11:55:04.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:55:04.089: INFO: namespace: e2e-tests-init-container-v88kh, resource: bindings, ignored listing per whitelist May 9 11:55:04.122: INFO: namespace e2e-tests-init-container-v88kh deletion completed in 6.089991747s • [SLOW TEST:13.790 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:55:04.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 9 11:55:04.254: INFO: Waiting up to 5m0s for pod "pod-eb7a1109-91eb-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-4nrmd" to be "success or failure" May 9 11:55:04.261: INFO: Pod "pod-eb7a1109-91eb-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.791113ms May 9 11:55:06.265: INFO: Pod "pod-eb7a1109-91eb-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010970362s May 9 11:55:08.269: INFO: Pod "pod-eb7a1109-91eb-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014955439s STEP: Saw pod success May 9 11:55:08.269: INFO: Pod "pod-eb7a1109-91eb-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:55:08.272: INFO: Trying to get logs from node hunter-worker2 pod pod-eb7a1109-91eb-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 11:55:08.302: INFO: Waiting for pod pod-eb7a1109-91eb-11ea-a20c-0242ac110018 to disappear May 9 11:55:08.304: INFO: Pod pod-eb7a1109-91eb-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:55:08.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4nrmd" for this suite. 
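The emptyDir specs in this run all follow the same shape: a short-lived pod writes into an emptyDir mount, the expected content and permission bits are checked, and the pod is expected to reach Succeeded. A hedged sketch of such a pod; the conformance test uses a dedicated mounttest image with flags, whereas this uses plain busybox, and the UID is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1001) // illustrative non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Write a file into the emptyDir and show its mode bits.
				Command: []string{"/bin/sh", "-c",
					"touch /test-volume/hello && chmod 0777 /test-volume/hello && ls -l /test-volume"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &runAsUser},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Default medium is node-local disk; Medium: "Memory" gives the tmpfs variant
				// exercised by the (non-root,0777,tmpfs) case.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}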
May 9 11:55:14.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:55:14.378: INFO: namespace: e2e-tests-emptydir-4nrmd, resource: bindings, ignored listing per whitelist May 9 11:55:14.420: INFO: namespace e2e-tests-emptydir-4nrmd deletion completed in 6.113543523s • [SLOW TEST:10.297 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:55:14.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 9 11:55:14.491: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix207002809/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:55:14.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mjv9f" for this suite. 
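The proxy test above starts "kubectl proxy --unix-socket=..." and then fetches /api/ through that socket. A standard-library-only sketch of the client side; the socket path is a placeholder and the proxy is assumed to already be running.

package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Assumes: kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock is already running.
	const sock = "/tmp/kubectl-proxy.sock"

	// Route every HTTP connection through the unix socket instead of TCP.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}

	// The URL's host part is ignored; the dialer above decides where to connect.
	resp, err := client.Get("http://unix/api/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("GET /api/ -> %s\n%s\n", resp.Status, body)
}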
May 9 11:55:20.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:55:20.605: INFO: namespace: e2e-tests-kubectl-mjv9f, resource: bindings, ignored listing per whitelist May 9 11:55:20.653: INFO: namespace e2e-tests-kubectl-mjv9f deletion completed in 6.097984634s • [SLOW TEST:6.233 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:55:20.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-k6gx STEP: Creating a pod to test atomic-volume-subpath May 9 11:55:20.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k6gx" in namespace "e2e-tests-subpath-c4tvb" to be "success or failure" May 9 11:55:20.801: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598519ms May 9 11:55:22.943: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145189054s May 9 11:55:24.948: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150255438s May 9 11:55:26.953: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15519413s May 9 11:55:28.957: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 8.159306011s May 9 11:55:30.961: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 10.16373698s May 9 11:55:32.966: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 12.168396185s May 9 11:55:34.970: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 14.172529173s May 9 11:55:36.974: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 16.176611001s May 9 11:55:38.979: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 18.181644355s May 9 11:55:40.984: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.186402748s May 9 11:55:42.989: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 22.19134454s May 9 11:55:44.993: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Running", Reason="", readiness=false. Elapsed: 24.195367471s May 9 11:55:46.997: INFO: Pod "pod-subpath-test-configmap-k6gx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.199832355s STEP: Saw pod success May 9 11:55:46.997: INFO: Pod "pod-subpath-test-configmap-k6gx" satisfied condition "success or failure" May 9 11:55:47.000: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-k6gx container test-container-subpath-configmap-k6gx: STEP: delete the pod May 9 11:55:47.132: INFO: Waiting for pod pod-subpath-test-configmap-k6gx to disappear May 9 11:55:47.143: INFO: Pod pod-subpath-test-configmap-k6gx no longer exists STEP: Deleting pod pod-subpath-test-configmap-k6gx May 9 11:55:47.143: INFO: Deleting pod "pod-subpath-test-configmap-k6gx" in namespace "e2e-tests-subpath-c4tvb" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:55:47.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c4tvb" for this suite. May 9 11:55:53.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:55:53.263: INFO: namespace: e2e-tests-subpath-c4tvb, resource: bindings, ignored listing per whitelist May 9 11:55:53.270: INFO: namespace e2e-tests-subpath-c4tvb deletion completed in 6.121425811s • [SLOW TEST:32.616 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:55:53.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 9 11:55:53.391: INFO: Waiting up to 5m0s for pod "downward-api-08c8133d-91ec-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-x7wdb" to be "success or failure" May 9 11:55:53.407: INFO: Pod "downward-api-08c8133d-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.93586ms May 9 11:55:55.410: INFO: Pod "downward-api-08c8133d-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019288108s May 9 11:55:57.414: INFO: Pod "downward-api-08c8133d-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02318638s May 9 11:55:59.418: INFO: Pod "downward-api-08c8133d-91ec-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027110448s STEP: Saw pod success May 9 11:55:59.418: INFO: Pod "downward-api-08c8133d-91ec-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:55:59.421: INFO: Trying to get logs from node hunter-worker2 pod downward-api-08c8133d-91ec-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 11:55:59.443: INFO: Waiting for pod downward-api-08c8133d-91ec-11ea-a20c-0242ac110018 to disappear May 9 11:55:59.448: INFO: Pod downward-api-08c8133d-91ec-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:55:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x7wdb" for this suite. May 9 11:56:05.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:56:05.800: INFO: namespace: e2e-tests-downward-api-x7wdb, resource: bindings, ignored listing per whitelist May 9 11:56:05.831: INFO: namespace e2e-tests-downward-api-x7wdb deletion completed in 6.379929906s • [SLOW TEST:12.562 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:56:05.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 11:56:05.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-254kn" to be "success or failure" May 9 11:56:06.051: INFO: Pod "downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 85.065373ms May 9 11:56:08.092: INFO: Pod "downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12687801s May 9 11:56:10.097: INFO: Pod "downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.131531444s May 9 11:56:12.101: INFO: Pod "downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135533782s STEP: Saw pod success May 9 11:56:12.101: INFO: Pod "downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:56:12.104: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 11:56:12.166: INFO: Waiting for pod downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018 to disappear May 9 11:56:12.174: INFO: Pod downwardapi-volume-1046a0d1-91ec-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:56:12.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-254kn" for this suite. May 9 11:56:18.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:56:18.233: INFO: namespace: e2e-tests-projected-254kn, resource: bindings, ignored listing per whitelist May 9 11:56:18.274: INFO: namespace e2e-tests-projected-254kn deletion completed in 6.097413384s • [SLOW TEST:12.443 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:56:18.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 9 11:56:18.388: INFO: Waiting up to 5m0s for pod "pod-17aa68b8-91ec-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-q9xll" to be "success or failure" May 9 11:56:18.398: INFO: Pod "pod-17aa68b8-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.621715ms May 9 11:56:20.408: INFO: Pod "pod-17aa68b8-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019514681s May 9 11:56:22.412: INFO: Pod "pod-17aa68b8-91ec-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024125182s STEP: Saw pod success May 9 11:56:22.412: INFO: Pod "pod-17aa68b8-91ec-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:56:22.416: INFO: Trying to get logs from node hunter-worker pod pod-17aa68b8-91ec-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 11:56:22.672: INFO: Waiting for pod pod-17aa68b8-91ec-11ea-a20c-0242ac110018 to disappear May 9 11:56:22.689: INFO: Pod pod-17aa68b8-91ec-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:56:22.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-q9xll" for this suite. May 9 11:56:28.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:56:28.743: INFO: namespace: e2e-tests-emptydir-q9xll, resource: bindings, ignored listing per whitelist May 9 11:56:28.788: INFO: namespace e2e-tests-emptydir-q9xll deletion completed in 6.091430452s • [SLOW TEST:10.514 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:56:28.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 9 11:56:28.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-8mhcf' May 9 11:56:28.968: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 11:56:28.968: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 9 11:56:28.977: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 9 11:56:29.003: INFO: scanned /root for discovery docs: May 9 11:56:29.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-8mhcf' May 9 11:56:44.875: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 9 11:56:44.875: INFO: stdout: "Created e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879\nScaling up e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 9 11:56:44.875: INFO: stdout: "Created e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879\nScaling up e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 9 11:56:44.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8mhcf' May 9 11:56:44.963: INFO: stderr: "" May 9 11:56:44.963: INFO: stdout: "e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879-pdqvb " May 9 11:56:44.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879-pdqvb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8mhcf' May 9 11:56:45.057: INFO: stderr: "" May 9 11:56:45.057: INFO: stdout: "true" May 9 11:56:45.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879-pdqvb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8mhcf' May 9 11:56:45.150: INFO: stderr: "" May 9 11:56:45.150: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 9 11:56:45.150: INFO: e2e-test-nginx-rc-20df1b53ab77df746a11c95aa55d9879-pdqvb is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 9 11:56:45.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8mhcf' May 9 11:56:45.257: INFO: stderr: "" May 9 11:56:45.257: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:56:45.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8mhcf" for this suite. May 9 11:56:51.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:56:51.340: INFO: namespace: e2e-tests-kubectl-8mhcf, resource: bindings, ignored listing per whitelist May 9 11:56:51.349: INFO: namespace e2e-tests-kubectl-8mhcf deletion completed in 6.088634817s • [SLOW TEST:22.561 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:56:51.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 9 11:56:55.478: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2b61d0ee-91ec-11ea-a20c-0242ac110018,GenerateName:,Namespace:e2e-tests-events-gv98l,SelfLink:/api/v1/namespaces/e2e-tests-events-gv98l/pods/send-events-2b61d0ee-91ec-11ea-a20c-0242ac110018,UID:2b653df6-91ec-11ea-99e8-0242ac110002,ResourceVersion:9586311,Generation:0,CreationTimestamp:2020-05-09 11:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 436171030,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zhkkj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zhkkj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zhkkj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027cd6d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027cd6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:56:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:56:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:56:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 11:56:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.136,StartTime:2020-05-09 11:56:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-09 11:56:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://cd00a5048e2fd9d774729bc9ca1cea6ce8541daa72471c73058f4f7fed4aa41f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 9 11:56:57.483: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 9 11:56:59.487: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:56:59.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-gv98l" for this suite. 
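For anyone replaying the event check above by hand: the spec waits for one event from the scheduler and one from the kubelet that reference the pod. A rough kubectl equivalent (pod name is illustrative, not the generated one from this run) is:

# Events that reference a specific pod: the scheduler's "Scheduled" event and
# the kubelet's "Pulled"/"Created"/"Started" events show up here.
kubectl get events --field-selector involvedObject.name=send-events-demo
# kubectl describe prints the same events at the end of its output.
kubectl describe pod send-events-demo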
May 9 11:57:43.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:57:43.612: INFO: namespace: e2e-tests-events-gv98l, resource: bindings, ignored listing per whitelist May 9 11:57:43.626: INFO: namespace e2e-tests-events-gv98l deletion completed in 44.126500765s • [SLOW TEST:52.276 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:57:43.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-4a8a294d-91ec-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 11:57:43.764: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-h7j6z" to be "success or failure" May 9 11:57:43.768: INFO: Pod "pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.419631ms May 9 11:57:45.854: INFO: Pod "pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09002008s May 9 11:57:47.933: INFO: Pod "pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168550683s STEP: Saw pod success May 9 11:57:47.933: INFO: Pod "pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:57:47.936: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 9 11:57:47.967: INFO: Waiting for pod pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018 to disappear May 9 11:57:47.984: INFO: Pod pod-projected-configmaps-4a8d9104-91ec-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:57:47.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h7j6z" for this suite. 
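As a rough sketch of what the projected-configMap spec above exercises (the suite builds the pod through its Go framework; the names and the busybox image below are illustrative):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Read the same key through both projected volumes.
    command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-volume-1
    - name: cm-volume-2
      mountPath: /etc/cm-volume-2
  volumes:
  - name: cm-volume-1
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
  - name: cm-volume-2
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF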
May 9 11:57:54.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:57:54.058: INFO: namespace: e2e-tests-projected-h7j6z, resource: bindings, ignored listing per whitelist May 9 11:57:54.091: INFO: namespace e2e-tests-projected-h7j6z deletion completed in 6.10384165s • [SLOW TEST:10.465 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:57:54.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-50cf06f8-91ec-11ea-a20c-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-50cf075d-91ec-11ea-a20c-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-50cf06f8-91ec-11ea-a20c-0242ac110018 STEP: Updating configmap cm-test-opt-upd-50cf075d-91ec-11ea-a20c-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-50cf0784-91ec-11ea-a20c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:58:04.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gktdp" for this suite. 
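The "optional updates" spec above deletes one referenced ConfigMap, updates another, and creates a third while the pod keeps running. A minimal sketch of the mechanism it relies on (all names are made up; optional: true is the key detail):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: optional-cm
      mountPath: /etc/optional-cm
  volumes:
  - name: optional-cm
    projected:
      sources:
      - configMap:
          name: cm-created-later
          optional: true     # pod starts even though the ConfigMap is absent
EOF

kubectl create configmap cm-created-later --from-literal=data-1=value-1
# After the kubelet's next sync the key appears under /etc/optional-cm/.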
May 9 11:58:34.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:58:34.422: INFO: namespace: e2e-tests-projected-gktdp, resource: bindings, ignored listing per whitelist May 9 11:58:34.482: INFO: namespace e2e-tests-projected-gktdp deletion completed in 30.116971962s • [SLOW TEST:40.391 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:58:34.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 9 11:58:34.580: INFO: Waiting up to 5m0s for pod "pod-68d8a9c2-91ec-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-crlwv" to be "success or failure" May 9 11:58:34.610: INFO: Pod "pod-68d8a9c2-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.439324ms May 9 11:58:36.614: INFO: Pod "pod-68d8a9c2-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033777082s May 9 11:58:38.618: INFO: Pod "pod-68d8a9c2-91ec-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037573089s STEP: Saw pod success May 9 11:58:38.618: INFO: Pod "pod-68d8a9c2-91ec-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:58:38.620: INFO: Trying to get logs from node hunter-worker pod pod-68d8a9c2-91ec-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 11:58:38.658: INFO: Waiting for pod pod-68d8a9c2-91ec-11ea-a20c-0242ac110018 to disappear May 9 11:58:38.729: INFO: Pod pod-68d8a9c2-91ec-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:58:38.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-crlwv" for this suite. 
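For reference, a minimal pod along these lines exercises the same emptyDir-on-default-medium behaviour as the spec above (names and the busybox image are illustrative; the suite uses its own mounttest image instead):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a 0777 file inside the emptyDir and print its mode.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # default medium: backed by the node's disk
EOF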
May 9 11:58:44.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:58:44.959: INFO: namespace: e2e-tests-emptydir-crlwv, resource: bindings, ignored listing per whitelist May 9 11:58:44.974: INFO: namespace e2e-tests-emptydir-crlwv deletion completed in 6.24196397s • [SLOW TEST:10.492 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:58:44.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 9 11:58:45.090: INFO: Waiting up to 5m0s for pod "pod-6f1f38e5-91ec-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-p7vnj" to be "success or failure" May 9 11:58:45.098: INFO: Pod "pod-6f1f38e5-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.361106ms May 9 11:58:47.245: INFO: Pod "pod-6f1f38e5-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155089623s May 9 11:58:49.250: INFO: Pod "pod-6f1f38e5-91ec-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159701282s STEP: Saw pod success May 9 11:58:49.250: INFO: Pod "pod-6f1f38e5-91ec-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 11:58:49.252: INFO: Trying to get logs from node hunter-worker2 pod pod-6f1f38e5-91ec-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 11:58:49.293: INFO: Waiting for pod pod-6f1f38e5-91ec-11ea-a20c-0242ac110018 to disappear May 9 11:58:49.308: INFO: Pod pod-6f1f38e5-91ec-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 11:58:49.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p7vnj" for this suite. 
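The tmpfs variant above additionally runs the container as a non-root user. A comparable sketch (user id, names, and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # the "(non-root,...)" part of the spec name
  containers:
  - name: test-container
    image: busybox
    # Write a 0644 file and confirm the volume is tmpfs-backed.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume && grep /test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # tmpfs-backed emptyDir
EOF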
May 9 11:58:55.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 11:58:55.436: INFO: namespace: e2e-tests-emptydir-p7vnj, resource: bindings, ignored listing per whitelist May 9 11:58:55.455: INFO: namespace e2e-tests-emptydir-p7vnj deletion completed in 6.143102458s • [SLOW TEST:10.481 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 11:58:55.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-755cddcb-91ec-11ea-a20c-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-755cddcb-91ec-11ea-a20c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:00:20.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wr422" for this suite. 
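The spec above updates a mounted ConfigMap in place and waits for the kubelet to rewrite the projected file without restarting the pod. With a pod that mounts the ConfigMap as a configMap volume (see the configMap volume sketch further below), the update step looks roughly like:

kubectl create configmap live-update-demo --from-literal=data-1=value-1
# ... start a pod mounting "live-update-demo" as a configMap volume, then:
kubectl patch configmap live-update-demo --type merge -p '{"data":{"data-1":"value-2"}}'
# Within the kubelet's sync period the mounted file switches to "value-2".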
May 9 12:00:42.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:00:42.122: INFO: namespace: e2e-tests-configmap-wr422, resource: bindings, ignored listing per whitelist May 9 12:00:42.165: INFO: namespace e2e-tests-configmap-wr422 deletion completed in 22.135265202s • [SLOW TEST:106.709 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:00:42.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 9 12:00:47.301: INFO: Successfully updated pod "labelsupdateb51adb1c-91ec-11ea-a20c-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:00:49.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bqhnc" for this suite. 
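The downward-API spec above relabels a running pod and waits for the new labels to show up in a projected file. A minimal sketch of that wiring (names and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF

# The label change is eventually reflected in /etc/podinfo/labels:
kubectl label pod labels-demo key1=value2 --overwrite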
May 9 12:01:11.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:01:11.490: INFO: namespace: e2e-tests-projected-bqhnc, resource: bindings, ignored listing per whitelist May 9 12:01:11.522: INFO: namespace e2e-tests-projected-bqhnc deletion completed in 22.101709563s • [SLOW TEST:29.357 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:01:11.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 9 12:01:19.775: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:19.786: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:21.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:21.791: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:23.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:23.790: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:25.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:25.791: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:27.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:27.790: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:29.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:29.791: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:31.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:31.790: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:33.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:33.790: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:35.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:35.791: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:37.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:37.930: INFO: Pod pod-with-poststart-exec-hook still exists May 9 12:01:39.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:39.793: INFO: Pod 
pod-with-poststart-exec-hook still exists May 9 12:01:41.786: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 9 12:01:41.798: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:01:41.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qh2xd" for this suite. May 9 12:02:03.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:02:03.974: INFO: namespace: e2e-tests-container-lifecycle-hook-qh2xd, resource: bindings, ignored listing per whitelist May 9 12:02:03.974: INFO: namespace e2e-tests-container-lifecycle-hook-qh2xd deletion completed in 22.171126127s • [SLOW TEST:52.452 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:02:03.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 12:02:04.100: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:02:08.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-mmkdq" for this suite. 
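The remote-command spec above drives the pod's exec subresource over a WebSocket from the Go client. A command-line counterpart (pod name is illustrative; kubectl exec talks to the same subresource, though over SPDY rather than a raw WebSocket):

kubectl run exec-demo --image=busybox --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/exec-demo
kubectl exec exec-demo -- echo remote execution works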
May 9 12:02:46.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:02:46.281: INFO: namespace: e2e-tests-pods-mmkdq, resource: bindings, ignored listing per whitelist May 9 12:02:46.324: INFO: namespace e2e-tests-pods-mmkdq deletion completed in 38.085298654s • [SLOW TEST:42.350 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:02:46.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-ff07b62b-91ec-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 12:02:46.544: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-n4plw" to be "success or failure" May 9 12:02:46.548: INFO: Pod "pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246389ms May 9 12:02:48.552: INFO: Pod "pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007886104s May 9 12:02:50.557: INFO: Pod "pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013007503s STEP: Saw pod success May 9 12:02:50.557: INFO: Pod "pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:02:50.561: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 9 12:02:50.612: INFO: Waiting for pod pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018 to disappear May 9 12:02:50.633: INFO: Pod pod-configmaps-ff0a036c-91ec-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:02:50.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-n4plw" for this suite. 
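For completeness, the configMap-as-volume pattern checked above looks roughly like this outside the suite (names and image are illustrative):

kubectl create configmap configmap-volume-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
EOF

kubectl logs pod-configmap-volume-demo   # prints "value-1"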
May 9 12:02:56.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:02:56.695: INFO: namespace: e2e-tests-configmap-n4plw, resource: bindings, ignored listing per whitelist May 9 12:02:56.753: INFO: namespace e2e-tests-configmap-n4plw deletion completed in 6.091676415s • [SLOW TEST:10.428 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:02:56.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 12:02:56.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-rzzvp" to be "success or failure" May 9 12:02:56.936: INFO: Pod "downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.90871ms May 9 12:02:58.940: INFO: Pod "downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037849523s May 9 12:03:00.944: INFO: Pod "downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041753363s STEP: Saw pod success May 9 12:03:00.944: INFO: Pod "downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:03:00.947: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 12:03:00.967: INFO: Waiting for pod downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018 to disappear May 9 12:03:00.997: INFO: Pod downwardapi-volume-0534fd4b-91ed-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:03:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rzzvp" for this suite. 
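The DefaultMode spec above asserts the permission bits of downward-API files. A sketch of the relevant field (the mode value, names, and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400      # applied to every projected file in the volume
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF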
May 9 12:03:07.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:03:07.102: INFO: namespace: e2e-tests-downward-api-rzzvp, resource: bindings, ignored listing per whitelist May 9 12:03:07.151: INFO: namespace e2e-tests-downward-api-rzzvp deletion completed in 6.151905408s • [SLOW TEST:10.398 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:03:07.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:03:11.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-jn9rv" for this suite. 
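The read-only busybox spec above relies on the readOnlyRootFilesystem security-context flag; a minimal stand-alone version (names and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox
    # The write must fail because / is mounted read-only.
    command: ["sh", "-c", "echo test > /file || echo 'write to root filesystem blocked'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs readonly-rootfs-demo   # "write to root filesystem blocked"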
May 9 12:03:53.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:03:53.462: INFO: namespace: e2e-tests-kubelet-test-jn9rv, resource: bindings, ignored listing per whitelist May 9 12:03:53.495: INFO: namespace e2e-tests-kubelet-test-jn9rv deletion completed in 42.191674349s • [SLOW TEST:46.344 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:03:53.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 12:03:53.581: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.884684ms)
May 9 12:03:53.591: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 9.502073ms)
May 9 12:03:53.595: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.66266ms)
May 9 12:03:53.601: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 5.263682ms)
May 9 12:03:53.609: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 8.57728ms)
May 9 12:03:53.633: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 23.340936ms)
May 9 12:03:53.644: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 11.363196ms)
May 9 12:03:53.647: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.90604ms)
May 9 12:03:53.650: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.89078ms)
May 9 12:03:53.652: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.386248ms)
May 9 12:03:53.655: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.650797ms)
May 9 12:03:53.658: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.05478ms)
May 9 12:03:53.661: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.856028ms)
May 9 12:03:53.664: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.687007ms)
May 9 12:03:53.667: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.179281ms)
May 9 12:03:53.671: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.708711ms)
May 9 12:03:53.674: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.133915ms)
May 9 12:03:53.677: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.390766ms)
May 9 12:03:53.680: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.910981ms)
May 9 12:03:53.689: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 8.518727ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:03:53.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-8vv46" for this suite. May 9 12:03:59.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:03:59.862: INFO: namespace: e2e-tests-proxy-8vv46, resource: bindings, ignored listing per whitelist May 9 12:03:59.889: INFO: namespace e2e-tests-proxy-8vv46 deletion completed in 6.196742012s • [SLOW TEST:6.393 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:03:59.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 9 12:04:00.156: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:04:13.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-w98hq" for this suite. 
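The twenty timed GETs above go through the API server's node proxy subresource with the kubelet port spelled out in the path. The same listing can be fetched by hand with kubectl's raw API access (node name taken from this run):

# Returns the directory listing seen in the log: containers/ and pods/
kubectl get --raw "/api/v1/nodes/hunter-worker:10250/proxy/logs/"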
May 9 12:04:35.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:04:35.820: INFO: namespace: e2e-tests-init-container-w98hq, resource: bindings, ignored listing per whitelist May 9 12:04:35.907: INFO: namespace e2e-tests-init-container-w98hq deletion completed in 22.168934022s • [SLOW TEST:36.018 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:04:35.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-404c65b5-91ed-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 12:04:36.038: INFO: Waiting up to 5m0s for pod "pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-grs5q" to be "success or failure" May 9 12:04:36.042: INFO: Pod "pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125575ms May 9 12:04:38.154: INFO: Pod "pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116330977s May 9 12:04:40.166: INFO: Pod "pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1275986s STEP: Saw pod success May 9 12:04:40.166: INFO: Pod "pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:04:40.168: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 9 12:04:40.194: INFO: Waiting for pod pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018 to disappear May 9 12:04:40.209: INFO: Pod pod-configmaps-404d5c4d-91ed-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:04:40.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-grs5q" for this suite. 
May 9 12:04:46.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:04:46.289: INFO: namespace: e2e-tests-configmap-grs5q, resource: bindings, ignored listing per whitelist May 9 12:04:46.307: INFO: namespace e2e-tests-configmap-grs5q deletion completed in 6.095269788s • [SLOW TEST:10.400 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:04:46.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:05:46.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-w5f8l" for this suite. 
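The probe spec above expects a pod whose readiness probe always fails to stay Running, never become Ready, and never be restarted. A minimal reproduction (names and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# READY stays 0/1 and RESTARTS stays 0:
kubectl get pod readiness-always-fails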
May 9 12:06:10.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:06:10.686: INFO: namespace: e2e-tests-container-probe-w5f8l, resource: bindings, ignored listing per whitelist May 9 12:06:10.721: INFO: namespace e2e-tests-container-probe-w5f8l deletion completed in 24.202972115s • [SLOW TEST:84.414 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:06:10.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 9 12:06:10.853: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587809,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 12:06:10.853: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587809,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 9 12:06:20.861: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587829,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 9 12:06:20.861: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587829,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 9 12:06:30.890: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587849,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 9 12:06:30.890: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587849,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 9 12:06:40.900: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587869,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} May 9 12:06:40.901: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-a,UID:78d13e63-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587869,Generation:0,CreationTimestamp:2020-05-09 12:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 9 12:06:50.918: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-b,UID:90b131d7-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587889,Generation:0,CreationTimestamp:2020-05-09 12:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 12:06:50.918: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-b,UID:90b131d7-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587889,Generation:0,CreationTimestamp:2020-05-09 12:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 9 12:07:00.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-b,UID:90b131d7-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587910,Generation:0,CreationTimestamp:2020-05-09 12:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 9 12:07:00.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ttwtf,SelfLink:/api/v1/namespaces/e2e-tests-watch-ttwtf/configmaps/e2e-watch-test-configmap-b,UID:90b131d7-91ed-11ea-99e8-0242ac110002,ResourceVersion:9587910,Generation:0,CreationTimestamp:2020-05-09 12:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:07:10.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ttwtf" for this suite. May 9 12:07:16.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:07:17.086: INFO: namespace: e2e-tests-watch-ttwtf, resource: bindings, ignored listing per whitelist May 9 12:07:17.100: INFO: namespace e2e-tests-watch-ttwtf deletion completed in 6.169046805s • [SLOW TEST:66.378 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:07:17.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-a05c84c7-91ed-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 12:07:17.212: INFO: Waiting up to 5m0s for pod "pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-p8nt7" to be "success or failure" May 9 12:07:17.219: INFO: Pod "pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.663494ms May 9 12:07:19.222: INFO: Pod "pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01032598s May 9 12:07:21.226: INFO: Pod "pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01431586s STEP: Saw pod success May 9 12:07:21.226: INFO: Pod "pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:07:21.229: INFO: Trying to get logs from node hunter-worker pod pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018 container secret-env-test: STEP: delete the pod May 9 12:07:21.253: INFO: Waiting for pod pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018 to disappear May 9 12:07:21.257: INFO: Pod pod-secrets-a05e557e-91ed-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:07:21.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-p8nt7" for this suite. 
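The secret-to-environment spec above can be reproduced roughly as follows (names and image are illustrative; the suite generates its own secret name):

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF

kubectl logs pod-secret-env-demo   # prints "value-1"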
May 9 12:07:27.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:07:27.282: INFO: namespace: e2e-tests-secrets-p8nt7, resource: bindings, ignored listing per whitelist May 9 12:07:27.353: INFO: namespace e2e-tests-secrets-p8nt7 deletion completed in 6.092428332s • [SLOW TEST:10.253 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:07:27.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:07:31.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-q2cr7" for this suite. 
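The wrapper-volumes test above creates a Secret and a ConfigMap (see its cleanup steps), mounts both into one pod, and verifies the two mounts do not conflict; both volume types are materialized on the node inside an emptyDir wrapper, which is what the check exercises. A minimal sketch of that wiring, with all names, paths, and the image being illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-configmap   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # illustrative
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
    - name: configmap-volume
      mountPath: /etc/configmap-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret       # created and later cleaned up by the test
  - name: configmap-volume
    configMap:
      name: wrapper-configmap          # created and later cleaned up by the test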
May 9 12:07:37.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:07:37.726: INFO: namespace: e2e-tests-emptydir-wrapper-q2cr7, resource: bindings, ignored listing per whitelist May 9 12:07:37.730: INFO: namespace e2e-tests-emptydir-wrapper-q2cr7 deletion completed in 6.124742511s • [SLOW TEST:10.377 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:07:37.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 9 12:07:37.852: INFO: Waiting up to 5m0s for pod "downward-api-acac329c-91ed-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-q5rqz" to be "success or failure" May 9 12:07:37.881: INFO: Pod "downward-api-acac329c-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.742157ms May 9 12:07:39.885: INFO: Pod "downward-api-acac329c-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032681996s May 9 12:07:41.934: INFO: Pod "downward-api-acac329c-91ed-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082078768s STEP: Saw pod success May 9 12:07:41.935: INFO: Pod "downward-api-acac329c-91ed-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:07:41.938: INFO: Trying to get logs from node hunter-worker pod downward-api-acac329c-91ed-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 12:07:41.982: INFO: Waiting for pod downward-api-acac329c-91ed-11ea-a20c-0242ac110018 to disappear May 9 12:07:41.994: INFO: Pod downward-api-acac329c-91ed-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:07:41.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q5rqz" for this suite. 
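The Downward API test above injects the container's own CPU/memory limits and requests into its environment and checks the printed values. A minimal sketch of a pod doing the same (env var names, resource values, and image are illustrative; dapi-container is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # illustrative
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory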
May 9 12:07:48.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:07:48.043: INFO: namespace: e2e-tests-downward-api-q5rqz, resource: bindings, ignored listing per whitelist May 9 12:07:48.089: INFO: namespace e2e-tests-downward-api-q5rqz deletion completed in 6.091332699s • [SLOW TEST:10.359 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:07:48.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 9 12:07:48.291: INFO: Waiting up to 5m0s for pod "var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018" in namespace "e2e-tests-var-expansion-24l8p" to be "success or failure" May 9 12:07:48.300: INFO: Pod "var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570848ms May 9 12:07:50.304: INFO: Pod "var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013037693s May 9 12:07:52.308: INFO: Pod "var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017227162s STEP: Saw pod success May 9 12:07:52.308: INFO: Pod "var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:07:52.311: INFO: Trying to get logs from node hunter-worker pod var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 12:07:52.385: INFO: Waiting for pod var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018 to disappear May 9 12:07:52.394: INFO: Pod var-expansion-b2d946bf-91ed-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:07:52.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-24l8p" for this suite. 
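The variable-expansion test above substitutes an environment variable into the container's args and verifies the expanded value in the container output. A minimal sketch (variable name, value, and image are illustrative; dapi-container is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # illustrative
    env:
    - name: TEST_VAR
      value: test-value
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"] # $(TEST_VAR) is expanded by the kubelet before the shell ever sees it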
May 9 12:07:58.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:07:58.445: INFO: namespace: e2e-tests-var-expansion-24l8p, resource: bindings, ignored listing per whitelist May 9 12:07:58.490: INFO: namespace e2e-tests-var-expansion-24l8p deletion completed in 6.091476069s • [SLOW TEST:10.401 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:07:58.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 9 12:07:58.619: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 9 12:07:58.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:01.547: INFO: stderr: "" May 9 12:08:01.547: INFO: stdout: "service/redis-slave created\n" May 9 12:08:01.548: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 9 12:08:01.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:01.846: INFO: stderr: "" May 9 12:08:01.846: INFO: stdout: "service/redis-master created\n" May 9 12:08:01.846: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 9 12:08:01.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:02.166: INFO: stderr: "" May 9 12:08:02.166: INFO: stdout: "service/frontend created\n" May 9 12:08:02.167: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 9 12:08:02.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:02.430: INFO: stderr: "" May 9 12:08:02.430: INFO: stdout: "deployment.extensions/frontend created\n" May 9 12:08:02.430: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 9 12:08:02.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:02.727: INFO: stderr: "" May 9 12:08:02.727: INFO: stdout: "deployment.extensions/redis-master created\n" May 9 12:08:02.727: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 9 12:08:02.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:02.983: INFO: stderr: "" May 9 12:08:02.983: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 9 12:08:02.983: INFO: Waiting for all frontend pods to be Running. May 9 12:08:13.033: INFO: Waiting for frontend to serve content. May 9 12:08:13.052: INFO: Trying to add a new entry to the guestbook. May 9 12:08:13.068: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 9 12:08:13.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:13.255: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 9 12:08:13.255: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 9 12:08:13.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:13.385: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 12:08:13.385: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 9 12:08:13.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:13.508: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 12:08:13.508: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 9 12:08:13.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:13.603: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 12:08:13.603: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 9 12:08:13.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:13.763: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 12:08:13.763: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 9 12:08:13.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7bnxh' May 9 12:08:14.244: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 9 12:08:14.244: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:08:14.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7bnxh" for this suite. 
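The guestbook manifests that kubectl applies are printed inline above in flattened form. For readability, the frontend Deployment from that output looks roughly as follows once re-indented (reconstructed from the log text, with the original explanatory comments shortened):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns           # the original comment suggests "env" for clusters without a DNS service
        ports:
        - containerPort: 80

The redis-master and redis-slave Deployments in the same output follow the same shape with their own images and containerPort 6379.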
May 9 12:08:52.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:08:52.810: INFO: namespace: e2e-tests-kubectl-7bnxh, resource: bindings, ignored listing per whitelist May 9 12:08:52.815: INFO: namespace e2e-tests-kubectl-7bnxh deletion completed in 38.566546379s • [SLOW TEST:54.325 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:08:52.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jtqw6 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-jtqw6 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-jtqw6 May 9 12:08:52.948: INFO: Found 0 stateful pods, waiting for 1 May 9 12:09:02.952: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 9 12:09:02.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jtqw6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 12:09:03.236: INFO: stderr: "I0509 12:09:03.093611 2771 log.go:172] (0xc000138790) (0xc0007c74a0) Create stream\nI0509 12:09:03.093672 2771 log.go:172] (0xc000138790) (0xc0007c74a0) Stream added, broadcasting: 1\nI0509 12:09:03.096096 2771 log.go:172] (0xc000138790) Reply frame received for 1\nI0509 12:09:03.096129 2771 log.go:172] (0xc000138790) (0xc0007c7540) Create stream\nI0509 12:09:03.096137 2771 log.go:172] (0xc000138790) (0xc0007c7540) Stream added, broadcasting: 3\nI0509 12:09:03.097753 2771 log.go:172] (0xc000138790) Reply frame received for 3\nI0509 12:09:03.097784 2771 log.go:172] (0xc000138790) (0xc000272000) Create stream\nI0509 12:09:03.097793 2771 log.go:172] (0xc000138790) (0xc000272000) Stream added, broadcasting: 5\nI0509 12:09:03.098815 2771 log.go:172] (0xc000138790) Reply frame received for 5\nI0509 12:09:03.230308 2771 log.go:172] (0xc000138790) Data frame received for 
3\nI0509 12:09:03.230332 2771 log.go:172] (0xc0007c7540) (3) Data frame handling\nI0509 12:09:03.230354 2771 log.go:172] (0xc0007c7540) (3) Data frame sent\nI0509 12:09:03.230362 2771 log.go:172] (0xc000138790) Data frame received for 3\nI0509 12:09:03.230366 2771 log.go:172] (0xc0007c7540) (3) Data frame handling\nI0509 12:09:03.230496 2771 log.go:172] (0xc000138790) Data frame received for 5\nI0509 12:09:03.230509 2771 log.go:172] (0xc000272000) (5) Data frame handling\nI0509 12:09:03.232676 2771 log.go:172] (0xc000138790) Data frame received for 1\nI0509 12:09:03.232696 2771 log.go:172] (0xc0007c74a0) (1) Data frame handling\nI0509 12:09:03.232724 2771 log.go:172] (0xc0007c74a0) (1) Data frame sent\nI0509 12:09:03.232743 2771 log.go:172] (0xc000138790) (0xc0007c74a0) Stream removed, broadcasting: 1\nI0509 12:09:03.232798 2771 log.go:172] (0xc000138790) Go away received\nI0509 12:09:03.232975 2771 log.go:172] (0xc000138790) (0xc0007c74a0) Stream removed, broadcasting: 1\nI0509 12:09:03.232993 2771 log.go:172] (0xc000138790) (0xc0007c7540) Stream removed, broadcasting: 3\nI0509 12:09:03.233006 2771 log.go:172] (0xc000138790) (0xc000272000) Stream removed, broadcasting: 5\n" May 9 12:09:03.236: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 12:09:03.236: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 12:09:03.240: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 9 12:09:13.244: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 12:09:13.244: INFO: Waiting for statefulset status.replicas updated to 0 May 9 12:09:13.267: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:13.267: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:13.267: INFO: May 9 12:09:13.267: INFO: StatefulSet ss has not reached scale 3, at 1 May 9 12:09:14.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986496504s May 9 12:09:15.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980817131s May 9 12:09:16.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.778577612s May 9 12:09:17.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.682258494s May 9 12:09:18.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.678469547s May 9 12:09:19.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.673257062s May 9 12:09:20.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.667798366s May 9 12:09:21.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.662315782s May 9 12:09:22.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 656.613102ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-jtqw6 May 9 12:09:23.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jtqw6 ss-0 -- /bin/sh -c mv 
-v /tmp/index.html /usr/share/nginx/html/ || true' May 9 12:09:23.868: INFO: stderr: "I0509 12:09:23.749854 2793 log.go:172] (0xc000138840) (0xc0006d3540) Create stream\nI0509 12:09:23.749948 2793 log.go:172] (0xc000138840) (0xc0006d3540) Stream added, broadcasting: 1\nI0509 12:09:23.761637 2793 log.go:172] (0xc000138840) Reply frame received for 1\nI0509 12:09:23.761773 2793 log.go:172] (0xc000138840) (0xc0006c6000) Create stream\nI0509 12:09:23.761836 2793 log.go:172] (0xc000138840) (0xc0006c6000) Stream added, broadcasting: 3\nI0509 12:09:23.764153 2793 log.go:172] (0xc000138840) Reply frame received for 3\nI0509 12:09:23.764215 2793 log.go:172] (0xc000138840) (0xc0006dc000) Create stream\nI0509 12:09:23.764254 2793 log.go:172] (0xc000138840) (0xc0006dc000) Stream added, broadcasting: 5\nI0509 12:09:23.766067 2793 log.go:172] (0xc000138840) Reply frame received for 5\nI0509 12:09:23.863466 2793 log.go:172] (0xc000138840) Data frame received for 5\nI0509 12:09:23.863507 2793 log.go:172] (0xc0006dc000) (5) Data frame handling\nI0509 12:09:23.863570 2793 log.go:172] (0xc000138840) Data frame received for 3\nI0509 12:09:23.863606 2793 log.go:172] (0xc0006c6000) (3) Data frame handling\nI0509 12:09:23.863628 2793 log.go:172] (0xc0006c6000) (3) Data frame sent\nI0509 12:09:23.863641 2793 log.go:172] (0xc000138840) Data frame received for 3\nI0509 12:09:23.863651 2793 log.go:172] (0xc0006c6000) (3) Data frame handling\nI0509 12:09:23.864946 2793 log.go:172] (0xc000138840) Data frame received for 1\nI0509 12:09:23.864978 2793 log.go:172] (0xc0006d3540) (1) Data frame handling\nI0509 12:09:23.864996 2793 log.go:172] (0xc0006d3540) (1) Data frame sent\nI0509 12:09:23.865012 2793 log.go:172] (0xc000138840) (0xc0006d3540) Stream removed, broadcasting: 1\nI0509 12:09:23.865031 2793 log.go:172] (0xc000138840) Go away received\nI0509 12:09:23.865459 2793 log.go:172] (0xc000138840) (0xc0006d3540) Stream removed, broadcasting: 1\nI0509 12:09:23.865486 2793 log.go:172] (0xc000138840) (0xc0006c6000) Stream removed, broadcasting: 3\nI0509 12:09:23.865506 2793 log.go:172] (0xc000138840) (0xc0006dc000) Stream removed, broadcasting: 5\n" May 9 12:09:23.868: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 12:09:23.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 12:09:23.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jtqw6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 12:09:24.071: INFO: stderr: "I0509 12:09:23.990520 2815 log.go:172] (0xc000138580) (0xc0006f05a0) Create stream\nI0509 12:09:23.990580 2815 log.go:172] (0xc000138580) (0xc0006f05a0) Stream added, broadcasting: 1\nI0509 12:09:23.992913 2815 log.go:172] (0xc000138580) Reply frame received for 1\nI0509 12:09:23.992962 2815 log.go:172] (0xc000138580) (0xc0006b0c80) Create stream\nI0509 12:09:23.992977 2815 log.go:172] (0xc000138580) (0xc0006b0c80) Stream added, broadcasting: 3\nI0509 12:09:23.994261 2815 log.go:172] (0xc000138580) Reply frame received for 3\nI0509 12:09:23.994304 2815 log.go:172] (0xc000138580) (0xc0006a4000) Create stream\nI0509 12:09:23.994318 2815 log.go:172] (0xc000138580) (0xc0006a4000) Stream added, broadcasting: 5\nI0509 12:09:23.995216 2815 log.go:172] (0xc000138580) Reply frame received for 5\nI0509 12:09:24.065652 2815 log.go:172] (0xc000138580) Data frame received for 5\nI0509 12:09:24.065688 2815 
log.go:172] (0xc0006a4000) (5) Data frame handling\nI0509 12:09:24.065704 2815 log.go:172] (0xc0006a4000) (5) Data frame sent\nI0509 12:09:24.065714 2815 log.go:172] (0xc000138580) Data frame received for 5\nI0509 12:09:24.065722 2815 log.go:172] (0xc0006a4000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0509 12:09:24.065749 2815 log.go:172] (0xc000138580) Data frame received for 3\nI0509 12:09:24.065759 2815 log.go:172] (0xc0006b0c80) (3) Data frame handling\nI0509 12:09:24.065772 2815 log.go:172] (0xc0006b0c80) (3) Data frame sent\nI0509 12:09:24.065779 2815 log.go:172] (0xc000138580) Data frame received for 3\nI0509 12:09:24.065784 2815 log.go:172] (0xc0006b0c80) (3) Data frame handling\nI0509 12:09:24.067077 2815 log.go:172] (0xc000138580) Data frame received for 1\nI0509 12:09:24.067100 2815 log.go:172] (0xc0006f05a0) (1) Data frame handling\nI0509 12:09:24.067129 2815 log.go:172] (0xc0006f05a0) (1) Data frame sent\nI0509 12:09:24.067165 2815 log.go:172] (0xc000138580) (0xc0006f05a0) Stream removed, broadcasting: 1\nI0509 12:09:24.067341 2815 log.go:172] (0xc000138580) Go away received\nI0509 12:09:24.067398 2815 log.go:172] (0xc000138580) (0xc0006f05a0) Stream removed, broadcasting: 1\nI0509 12:09:24.067443 2815 log.go:172] (0xc000138580) (0xc0006b0c80) Stream removed, broadcasting: 3\nI0509 12:09:24.067457 2815 log.go:172] (0xc000138580) (0xc0006a4000) Stream removed, broadcasting: 5\n" May 9 12:09:24.071: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 12:09:24.071: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 12:09:24.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jtqw6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 9 12:09:24.271: INFO: stderr: "I0509 12:09:24.203180 2838 log.go:172] (0xc000138630) (0xc000724640) Create stream\nI0509 12:09:24.203254 2838 log.go:172] (0xc000138630) (0xc000724640) Stream added, broadcasting: 1\nI0509 12:09:24.205911 2838 log.go:172] (0xc000138630) Reply frame received for 1\nI0509 12:09:24.205951 2838 log.go:172] (0xc000138630) (0xc000582d20) Create stream\nI0509 12:09:24.205960 2838 log.go:172] (0xc000138630) (0xc000582d20) Stream added, broadcasting: 3\nI0509 12:09:24.207273 2838 log.go:172] (0xc000138630) Reply frame received for 3\nI0509 12:09:24.207340 2838 log.go:172] (0xc000138630) (0xc0002a4000) Create stream\nI0509 12:09:24.207437 2838 log.go:172] (0xc000138630) (0xc0002a4000) Stream added, broadcasting: 5\nI0509 12:09:24.208499 2838 log.go:172] (0xc000138630) Reply frame received for 5\nI0509 12:09:24.265740 2838 log.go:172] (0xc000138630) Data frame received for 3\nI0509 12:09:24.265774 2838 log.go:172] (0xc000582d20) (3) Data frame handling\nI0509 12:09:24.265795 2838 log.go:172] (0xc000582d20) (3) Data frame sent\nI0509 12:09:24.265805 2838 log.go:172] (0xc000138630) Data frame received for 3\nI0509 12:09:24.265818 2838 log.go:172] (0xc000582d20) (3) Data frame handling\nI0509 12:09:24.265891 2838 log.go:172] (0xc000138630) Data frame received for 5\nI0509 12:09:24.265909 2838 log.go:172] (0xc0002a4000) (5) Data frame handling\nI0509 12:09:24.265919 2838 log.go:172] (0xc0002a4000) (5) Data frame sent\nI0509 12:09:24.265928 2838 log.go:172] (0xc000138630) Data frame received for 5\nI0509 12:09:24.265935 2838 log.go:172] (0xc0002a4000) (5) Data frame handling\nmv: can't 
rename '/tmp/index.html': No such file or directory\nI0509 12:09:24.267533 2838 log.go:172] (0xc000138630) Data frame received for 1\nI0509 12:09:24.267561 2838 log.go:172] (0xc000724640) (1) Data frame handling\nI0509 12:09:24.267577 2838 log.go:172] (0xc000724640) (1) Data frame sent\nI0509 12:09:24.267593 2838 log.go:172] (0xc000138630) (0xc000724640) Stream removed, broadcasting: 1\nI0509 12:09:24.267721 2838 log.go:172] (0xc000138630) Go away received\nI0509 12:09:24.267784 2838 log.go:172] (0xc000138630) (0xc000724640) Stream removed, broadcasting: 1\nI0509 12:09:24.267809 2838 log.go:172] (0xc000138630) (0xc000582d20) Stream removed, broadcasting: 3\nI0509 12:09:24.267825 2838 log.go:172] (0xc000138630) (0xc0002a4000) Stream removed, broadcasting: 5\n" May 9 12:09:24.271: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 9 12:09:24.271: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 9 12:09:24.276: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 9 12:09:34.279: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 9 12:09:34.279: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 9 12:09:34.279: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 9 12:09:34.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jtqw6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 12:09:34.518: INFO: stderr: "I0509 12:09:34.424892 2859 log.go:172] (0xc0008602c0) (0xc00070e640) Create stream\nI0509 12:09:34.424947 2859 log.go:172] (0xc0008602c0) (0xc00070e640) Stream added, broadcasting: 1\nI0509 12:09:34.427963 2859 log.go:172] (0xc0008602c0) Reply frame received for 1\nI0509 12:09:34.428014 2859 log.go:172] (0xc0008602c0) (0xc0005b2c80) Create stream\nI0509 12:09:34.428032 2859 log.go:172] (0xc0008602c0) (0xc0005b2c80) Stream added, broadcasting: 3\nI0509 12:09:34.429075 2859 log.go:172] (0xc0008602c0) Reply frame received for 3\nI0509 12:09:34.429326 2859 log.go:172] (0xc0008602c0) (0xc00020c000) Create stream\nI0509 12:09:34.429364 2859 log.go:172] (0xc0008602c0) (0xc00020c000) Stream added, broadcasting: 5\nI0509 12:09:34.430208 2859 log.go:172] (0xc0008602c0) Reply frame received for 5\nI0509 12:09:34.513506 2859 log.go:172] (0xc0008602c0) Data frame received for 5\nI0509 12:09:34.513673 2859 log.go:172] (0xc00020c000) (5) Data frame handling\nI0509 12:09:34.513726 2859 log.go:172] (0xc0008602c0) Data frame received for 3\nI0509 12:09:34.513742 2859 log.go:172] (0xc0005b2c80) (3) Data frame handling\nI0509 12:09:34.513769 2859 log.go:172] (0xc0005b2c80) (3) Data frame sent\nI0509 12:09:34.513788 2859 log.go:172] (0xc0008602c0) Data frame received for 3\nI0509 12:09:34.513798 2859 log.go:172] (0xc0005b2c80) (3) Data frame handling\nI0509 12:09:34.515079 2859 log.go:172] (0xc0008602c0) Data frame received for 1\nI0509 12:09:34.515103 2859 log.go:172] (0xc00070e640) (1) Data frame handling\nI0509 12:09:34.515132 2859 log.go:172] (0xc00070e640) (1) Data frame sent\nI0509 12:09:34.515175 2859 log.go:172] (0xc0008602c0) (0xc00070e640) Stream removed, broadcasting: 1\nI0509 12:09:34.515197 2859 log.go:172] (0xc0008602c0) Go away received\nI0509 12:09:34.515410 2859 
log.go:172] (0xc0008602c0) (0xc00070e640) Stream removed, broadcasting: 1\nI0509 12:09:34.515431 2859 log.go:172] (0xc0008602c0) (0xc0005b2c80) Stream removed, broadcasting: 3\nI0509 12:09:34.515440 2859 log.go:172] (0xc0008602c0) (0xc00020c000) Stream removed, broadcasting: 5\n" May 9 12:09:34.518: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 12:09:34.518: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 12:09:34.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jtqw6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 12:09:34.809: INFO: stderr: "I0509 12:09:34.662979 2882 log.go:172] (0xc00015c840) (0xc0005e32c0) Create stream\nI0509 12:09:34.663034 2882 log.go:172] (0xc00015c840) (0xc0005e32c0) Stream added, broadcasting: 1\nI0509 12:09:34.665639 2882 log.go:172] (0xc00015c840) Reply frame received for 1\nI0509 12:09:34.665678 2882 log.go:172] (0xc00015c840) (0xc000770000) Create stream\nI0509 12:09:34.665693 2882 log.go:172] (0xc00015c840) (0xc000770000) Stream added, broadcasting: 3\nI0509 12:09:34.666612 2882 log.go:172] (0xc00015c840) Reply frame received for 3\nI0509 12:09:34.666664 2882 log.go:172] (0xc00015c840) (0xc000672000) Create stream\nI0509 12:09:34.666680 2882 log.go:172] (0xc00015c840) (0xc000672000) Stream added, broadcasting: 5\nI0509 12:09:34.667571 2882 log.go:172] (0xc00015c840) Reply frame received for 5\nI0509 12:09:34.803815 2882 log.go:172] (0xc00015c840) Data frame received for 5\nI0509 12:09:34.803845 2882 log.go:172] (0xc000672000) (5) Data frame handling\nI0509 12:09:34.803881 2882 log.go:172] (0xc00015c840) Data frame received for 3\nI0509 12:09:34.803890 2882 log.go:172] (0xc000770000) (3) Data frame handling\nI0509 12:09:34.803895 2882 log.go:172] (0xc000770000) (3) Data frame sent\nI0509 12:09:34.803899 2882 log.go:172] (0xc00015c840) Data frame received for 3\nI0509 12:09:34.803902 2882 log.go:172] (0xc000770000) (3) Data frame handling\nI0509 12:09:34.805013 2882 log.go:172] (0xc00015c840) Data frame received for 1\nI0509 12:09:34.805025 2882 log.go:172] (0xc0005e32c0) (1) Data frame handling\nI0509 12:09:34.805034 2882 log.go:172] (0xc0005e32c0) (1) Data frame sent\nI0509 12:09:34.805441 2882 log.go:172] (0xc00015c840) (0xc0005e32c0) Stream removed, broadcasting: 1\nI0509 12:09:34.805492 2882 log.go:172] (0xc00015c840) Go away received\nI0509 12:09:34.805575 2882 log.go:172] (0xc00015c840) (0xc0005e32c0) Stream removed, broadcasting: 1\nI0509 12:09:34.805586 2882 log.go:172] (0xc00015c840) (0xc000770000) Stream removed, broadcasting: 3\nI0509 12:09:34.805593 2882 log.go:172] (0xc00015c840) (0xc000672000) Stream removed, broadcasting: 5\n" May 9 12:09:34.809: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 12:09:34.809: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 12:09:34.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jtqw6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 9 12:09:35.020: INFO: stderr: "I0509 12:09:34.938375 2904 log.go:172] (0xc00082a2c0) (0xc000687540) Create stream\nI0509 12:09:34.938441 2904 log.go:172] (0xc00082a2c0) (0xc000687540) Stream added, broadcasting: 1\nI0509 12:09:34.941474 2904 
log.go:172] (0xc00082a2c0) Reply frame received for 1\nI0509 12:09:34.941509 2904 log.go:172] (0xc00082a2c0) (0xc0001be500) Create stream\nI0509 12:09:34.941521 2904 log.go:172] (0xc00082a2c0) (0xc0001be500) Stream added, broadcasting: 3\nI0509 12:09:34.942259 2904 log.go:172] (0xc00082a2c0) Reply frame received for 3\nI0509 12:09:34.942290 2904 log.go:172] (0xc00082a2c0) (0xc000454000) Create stream\nI0509 12:09:34.942311 2904 log.go:172] (0xc00082a2c0) (0xc000454000) Stream added, broadcasting: 5\nI0509 12:09:34.943123 2904 log.go:172] (0xc00082a2c0) Reply frame received for 5\nI0509 12:09:35.014204 2904 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0509 12:09:35.014250 2904 log.go:172] (0xc0001be500) (3) Data frame handling\nI0509 12:09:35.014285 2904 log.go:172] (0xc0001be500) (3) Data frame sent\nI0509 12:09:35.014312 2904 log.go:172] (0xc00082a2c0) Data frame received for 5\nI0509 12:09:35.014380 2904 log.go:172] (0xc000454000) (5) Data frame handling\nI0509 12:09:35.014427 2904 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0509 12:09:35.014467 2904 log.go:172] (0xc0001be500) (3) Data frame handling\nI0509 12:09:35.015762 2904 log.go:172] (0xc00082a2c0) Data frame received for 1\nI0509 12:09:35.015794 2904 log.go:172] (0xc000687540) (1) Data frame handling\nI0509 12:09:35.015816 2904 log.go:172] (0xc000687540) (1) Data frame sent\nI0509 12:09:35.015834 2904 log.go:172] (0xc00082a2c0) (0xc000687540) Stream removed, broadcasting: 1\nI0509 12:09:35.015890 2904 log.go:172] (0xc00082a2c0) Go away received\nI0509 12:09:35.016025 2904 log.go:172] (0xc00082a2c0) (0xc000687540) Stream removed, broadcasting: 1\nI0509 12:09:35.016047 2904 log.go:172] (0xc00082a2c0) (0xc0001be500) Stream removed, broadcasting: 3\nI0509 12:09:35.016062 2904 log.go:172] (0xc00082a2c0) (0xc000454000) Stream removed, broadcasting: 5\n" May 9 12:09:35.020: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 9 12:09:35.020: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 9 12:09:35.020: INFO: Waiting for statefulset status.replicas updated to 0 May 9 12:09:35.023: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 9 12:09:45.030: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 9 12:09:45.030: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 9 12:09:45.030: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 9 12:09:45.052: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:45.052: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:45.052: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:45.052: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:45.052: INFO: May 9 12:09:45.052: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 12:09:46.059: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:46.059: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:46.060: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:46.060: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:46.060: INFO: May 9 12:09:46.060: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 12:09:47.064: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:47.064: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:47.064: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:47.064: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:47.064: INFO: May 9 12:09:47.064: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 12:09:48.070: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:48.070: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:48.070: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:48.070: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:48.070: INFO: May 9 12:09:48.070: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 12:09:49.075: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:49.075: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:49.075: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:49.075: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:49.076: INFO: May 9 12:09:49.076: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 12:09:50.080: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:50.080: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:50.080: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:50.080: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:50.080: INFO: May 9 12:09:50.080: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 12:09:51.086: INFO: POD NODE PHASE GRACE CONDITIONS May 9 12:09:51.086: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:08:52 +0000 UTC }] May 9 12:09:51.086: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:51.086: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:09:13 +0000 UTC }] May 9 12:09:51.086: INFO: May 9 12:09:51.086: INFO: StatefulSet ss has not reached scale 0, at 3 May 9 12:09:52.091: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.949374357s May 
9 12:09:53.094: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.945057799s May 9 12:09:54.099: INFO: Verifying statefulset ss doesn't scale past 0 for another 941.120978ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-jtqw6 May 9 12:09:55.102: INFO: Scaling statefulset ss to 0 May 9 12:09:55.111: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 9 12:09:55.114: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jtqw6 May 9 12:09:55.116: INFO: Scaling statefulset ss to 0 May 9 12:09:55.125: INFO: Waiting for statefulset status.replicas updated to 0 May 9 12:09:55.127: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:09:55.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jtqw6" for this suite. May 9 12:10:01.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:10:01.220: INFO: namespace: e2e-tests-statefulset-jtqw6, resource: bindings, ignored listing per whitelist May 9 12:10:01.227: INFO: namespace e2e-tests-statefulset-jtqw6 deletion completed in 6.085385063s • [SLOW TEST:68.412 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:10:01.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-023b0544-91ee-11ea-a20c-0242ac110018 STEP: Creating secret with name s-test-opt-upd-023b060f-91ee-11ea-a20c-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-023b0544-91ee-11ea-a20c-0242ac110018 STEP: Updating secret s-test-opt-upd-023b060f-91ee-11ea-a20c-0242ac110018 STEP: Creating secret with name s-test-opt-create-023b0660-91ee-11ea-a20c-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:10:09.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4d56n" for 
this suite. May 9 12:10:31.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:10:31.625: INFO: namespace: e2e-tests-projected-4d56n, resource: bindings, ignored listing per whitelist May 9 12:10:31.652: INFO: namespace e2e-tests-projected-4d56n deletion completed in 22.101676828s • [SLOW TEST:30.425 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:10:31.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-14513b32-91ee-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 12:10:31.749: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-v226b" to be "success or failure" May 9 12:10:31.799: INFO: Pod "pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 49.916528ms May 9 12:10:33.804: INFO: Pod "pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05431973s May 9 12:10:35.809: INFO: Pod "pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059895302s STEP: Saw pod success May 9 12:10:35.809: INFO: Pod "pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:10:35.812: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 9 12:10:35.867: INFO: Waiting for pod pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018 to disappear May 9 12:10:35.873: INFO: Pod pod-projected-secrets-1452802f-91ee-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:10:35.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v226b" for this suite. 
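The projected-secret test above creates a Secret, projects it into a volume, and reads the projected file back from the pod before deleting it. A minimal sketch of the projection (names, mount path, key, and image are illustrative; projected-secret-volume-test is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets  # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox             # illustrative
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test   # the Secret created at the start of the test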
May 9 12:10:41.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:10:41.930: INFO: namespace: e2e-tests-projected-v226b, resource: bindings, ignored listing per whitelist May 9 12:10:41.960: INFO: namespace e2e-tests-projected-v226b deletion completed in 6.084332221s • [SLOW TEST:10.308 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:10:41.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 9 12:10:42.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-rgkpj' May 9 12:10:42.176: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 12:10:42.176: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 9 12:10:44.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-rgkpj' May 9 12:10:44.382: INFO: stderr: "" May 9 12:10:44.382: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:10:44.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rgkpj" for this suite. 
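The deprecated kubectl run --generator=deployment/apps.v1 path exercised in the Kubectl run default case above is roughly equivalent to creating a Deployment like the sketch below; the image and deployment name come from the log, the rest (labels, replica count) reflects the generator's usual defaults and should be read as illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine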
May 9 12:10:50.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:10:50.551: INFO: namespace: e2e-tests-kubectl-rgkpj, resource: bindings, ignored listing per whitelist May 9 12:10:50.591: INFO: namespace e2e-tests-kubectl-rgkpj deletion completed in 6.205057459s • [SLOW TEST:8.631 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:10:50.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bkp4k in namespace e2e-tests-proxy-pkq62 I0509 12:10:50.737472 6 runners.go:184] Created replication controller with name: proxy-service-bkp4k, namespace: e2e-tests-proxy-pkq62, replica count: 1 I0509 12:10:51.787943 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 12:10:52.788227 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0509 12:10:53.788456 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 12:10:54.788731 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 12:10:55.788951 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 12:10:56.789373 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 12:10:57.789607 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 12:10:58.789823 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0509 12:10:59.790074 6 runners.go:184] proxy-service-bkp4k Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 9 12:10:59.794: INFO: setup took 9.123632337s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 
9 12:10:59.802: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-pkq62/pods/proxy-service-bkp4k-vg6rv:1080/proxy/: [the proxied response bodies, the remainder of the Proxy test output, and the header block of the next test ([sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root) were lost when this log was captured] >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2fe14c93-91ee-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 12:11:18.007: INFO: Waiting up to 5m0s for pod "pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-nvr95" to be "success or failure" May 9 12:11:18.018: INFO: Pod "pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.51504ms May 9 12:11:20.022: INFO: Pod "pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015049169s May 9 12:11:22.027: INFO: Pod "pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019329569s STEP: Saw pod success May 9 12:11:22.027: INFO: Pod "pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:11:22.030: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 9 12:11:22.089: INFO: Waiting for pod pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018 to disappear May 9 12:11:22.168: INFO: Pod pod-configmaps-2fe56215-91ee-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:11:22.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nvr95" for this suite.
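The "mappings as non-root" variant of the ConfigMap volume test above mounts a ConfigMap with an items: mapping and runs the container under a non-root UID. A minimal sketch of such a pod, with key names, paths, image, and UID chosen purely for illustration (the container name matches the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map-example   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example               # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                           # any non-root UID
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29     # placeholder image; the suite uses its own test images
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      items:
      - key: data-1
        path: path/to/data-1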
May 9 12:11:28.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:11:28.300: INFO: namespace: e2e-tests-configmap-nvr95, resource: bindings, ignored listing per whitelist May 9 12:11:28.314: INFO: namespace e2e-tests-configmap-nvr95 deletion completed in 6.140963475s • [SLOW TEST:10.422 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:11:28.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 9 12:11:28.451: INFO: Waiting up to 5m0s for pod "downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018" in namespace "e2e-tests-projected-j82rk" to be "success or failure" May 9 12:11:28.461: INFO: Pod "downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.737258ms May 9 12:11:30.902: INFO: Pod "downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451527546s May 9 12:11:32.907: INFO: Pod "downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.456298549s May 9 12:11:34.911: INFO: Pod "downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.460350736s STEP: Saw pod success May 9 12:11:34.911: INFO: Pod "downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:11:34.914: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018 container client-container: STEP: delete the pod May 9 12:11:34.929: INFO: Waiting for pod downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018 to disappear May 9 12:11:34.934: INFO: Pod downwardapi-volume-361d041d-91ee-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:11:34.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j82rk" for this suite. 
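The Projected downwardAPI case above verifies that a container's memory limit can be surfaced as a file through a projected volume. A hedged sketch of a pod in that shape (file name, image, and resource limits are illustrative; the container name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # placeholder image; the suite uses its own test images
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        cpu: 250m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory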
May 9 12:11:40.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:11:41.014: INFO: namespace: e2e-tests-projected-j82rk, resource: bindings, ignored listing per whitelist May 9 12:11:41.023: INFO: namespace e2e-tests-projected-j82rk deletion completed in 6.086616116s • [SLOW TEST:12.709 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:11:41.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 9 12:11:49.276: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 9 12:11:49.282: INFO: Pod pod-with-poststart-http-hook still exists May 9 12:11:51.282: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 9 12:11:51.327: INFO: Pod pod-with-poststart-http-hook still exists May 9 12:11:53.282: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 9 12:11:53.303: INFO: Pod pod-with-poststart-http-hook still exists May 9 12:11:55.282: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 9 12:11:55.286: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:11:55.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mhs4r" for this suite. 
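The post-start hook exercised in the Container Lifecycle Hook case above is declared on the container's lifecycle block; the hook's HTTP GET is sent to the separately created handler pod. A minimal illustrative sketch (the handler address, port, and path are placeholders, while the pod name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart
          port: 8080
          host: 10.244.1.10   # placeholder: IP of the hook-handler pod created in BeforeEach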
May 9 12:12:17.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:12:17.413: INFO: namespace: e2e-tests-container-lifecycle-hook-mhs4r, resource: bindings, ignored listing per whitelist May 9 12:12:17.433: INFO: namespace e2e-tests-container-lifecycle-hook-mhs4r deletion completed in 22.143046221s • [SLOW TEST:36.410 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:12:17.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 9 12:12:17.540: INFO: Pod name pod-release: Found 0 pods out of 1 May 9 12:12:22.544: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:12:23.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-nrdsl" for this suite. 
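The release behaviour checked in the ReplicationController case above hinges on the controller's label selector: once a pod's labels stop matching, the controller orphans that pod (drops its controller ownerReference) and creates a replacement. An illustrative controller of the shape used here; the name pod-release comes from the log, the remaining spec details are placeholders:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/nginx:1.14-alpine   # assumed image

Relabelling the running pod so that name: pod-release no longer matches is what the "When the matched label of one of its pods change" step above does.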
May 9 12:12:31.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:12:31.716: INFO: namespace: e2e-tests-replication-controller-nrdsl, resource: bindings, ignored listing per whitelist May 9 12:12:31.718: INFO: namespace e2e-tests-replication-controller-nrdsl deletion completed in 8.111654265s • [SLOW TEST:14.284 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:12:31.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5c192a34-91ee-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 12:12:32.846: INFO: Waiting up to 5m0s for pod "pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-h4xj8" to be "success or failure" May 9 12:12:32.894: INFO: Pod "pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 48.418775ms May 9 12:12:34.898: INFO: Pod "pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052224663s May 9 12:12:36.902: INFO: Pod "pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.05627469s May 9 12:12:38.905: INFO: Pod "pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059151927s STEP: Saw pod success May 9 12:12:38.905: INFO: Pod "pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:12:38.908: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 12:12:39.030: INFO: Waiting for pod pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018 to disappear May 9 12:12:39.068: INFO: Pod pod-secrets-5c801fa1-91ee-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:12:39.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-h4xj8" for this suite. 
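The point of the Secrets case above is that a secret volume reference resolves within the pod's own namespace, even when another namespace holds a secret with the same name. A minimal illustrative secret/pod pair (names, key, and image are placeholders; the container name matches the log):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test           # a secret with this same name may exist in another namespace without conflict
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # placeholder image; the suite uses its own test images
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test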
May 9 12:12:45.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:12:45.357: INFO: namespace: e2e-tests-secrets-h4xj8, resource: bindings, ignored listing per whitelist May 9 12:12:45.357: INFO: namespace e2e-tests-secrets-h4xj8 deletion completed in 6.283065111s STEP: Destroying namespace "e2e-tests-secret-namespace-b28gg" for this suite. May 9 12:12:51.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:12:51.597: INFO: namespace: e2e-tests-secret-namespace-b28gg, resource: bindings, ignored listing per whitelist May 9 12:12:51.606: INFO: namespace e2e-tests-secret-namespace-b28gg deletion completed in 6.248578757s • [SLOW TEST:19.888 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:12:51.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 9 12:12:56.309: INFO: Successfully updated pod "annotationupdate67c8510c-91ee-11ea-a20c-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:12:58.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rr6vx" for this suite. 
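The annotation-update check above relies on the kubelet refreshing downwardAPI volume contents after pod metadata changes. An illustrative pod that exposes its annotations as a file (annotation key/value, image, and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative name
  annotations:
    builder: alice                 # value the test later mutates
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # placeholder image; the suite uses its own test images
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations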
May 9 12:13:20.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:13:20.412: INFO: namespace: e2e-tests-downward-api-rr6vx, resource: bindings, ignored listing per whitelist May 9 12:13:20.465: INFO: namespace e2e-tests-downward-api-rr6vx deletion completed in 22.101871227s • [SLOW TEST:28.857 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:13:20.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 9 12:13:20.586: INFO: Waiting up to 5m0s for pod "downward-api-78f3b806-91ee-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-d695b" to be "success or failure" May 9 12:13:20.590: INFO: Pod "downward-api-78f3b806-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363064ms May 9 12:13:22.609: INFO: Pod "downward-api-78f3b806-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023403918s May 9 12:13:24.613: INFO: Pod "downward-api-78f3b806-91ee-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.027490779s May 9 12:13:26.618: INFO: Pod "downward-api-78f3b806-91ee-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031939271s STEP: Saw pod success May 9 12:13:26.618: INFO: Pod "downward-api-78f3b806-91ee-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:13:26.620: INFO: Trying to get logs from node hunter-worker pod downward-api-78f3b806-91ee-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 12:13:26.654: INFO: Waiting for pod downward-api-78f3b806-91ee-11ea-a20c-0242ac110018 to disappear May 9 12:13:26.657: INFO: Pod downward-api-78f3b806-91ee-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:13:26.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-d695b" for this suite. 
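The pod-UID check above uses the env-var flavour of the downward API. A minimal illustrative pod (env-var name and image are placeholders; the container name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29   # placeholder image; the suite uses its own test images
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid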
May 9 12:13:32.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:13:32.710: INFO: namespace: e2e-tests-downward-api-d695b, resource: bindings, ignored listing per whitelist May 9 12:13:32.763: INFO: namespace e2e-tests-downward-api-d695b deletion completed in 6.102835334s • [SLOW TEST:12.298 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:13:32.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0509 12:14:14.540344 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 9 12:14:14.540: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:14:14.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5sw7k" for this suite. 
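The orphaning behaviour verified by the Garbage collector case above is driven by the delete options sent when the ReplicationController is deleted: propagationPolicy: Orphan tells the garbage collector to strip owner references from the pods instead of deleting them, which is why they survive the 30-second wait. A hedged sketch of such a DeleteOptions body (the resource path and RC name are placeholders):

# Sent as the body of: DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<rc-name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan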
May 9 12:14:24.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:14:24.727: INFO: namespace: e2e-tests-gc-5sw7k, resource: bindings, ignored listing per whitelist May 9 12:14:24.756: INFO: namespace e2e-tests-gc-5sw7k deletion completed in 10.21160763s • [SLOW TEST:51.993 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:14:24.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 9 12:14:25.228: INFO: Waiting up to 5m0s for pod "client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018" in namespace "e2e-tests-containers-4p4rl" to be "success or failure" May 9 12:14:25.302: INFO: Pod "client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 73.557724ms May 9 12:14:27.306: INFO: Pod "client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077650659s May 9 12:14:29.310: INFO: Pod "client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.081942511s May 9 12:14:31.314: INFO: Pod "client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086254954s STEP: Saw pod success May 9 12:14:31.314: INFO: Pod "client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:14:31.317: INFO: Trying to get logs from node hunter-worker2 pod client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 12:14:31.362: INFO: Waiting for pod client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018 to disappear May 9 12:14:31.370: INFO: Pod client-containers-9f7af6e1-91ee-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:14:31.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-4p4rl" for this suite. 
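"Override all" in the Docker Containers case above means both the image's ENTRYPOINT and CMD are replaced from the pod spec via command: and args:. A minimal illustrative pod (image and argument values are placeholders; the container name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # placeholder image; the suite uses its own test images
    command: ["/bin/echo"]                  # overrides the image ENTRYPOINT
    args: ["override", "arguments"]         # overrides the image CMD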
May 9 12:14:37.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:14:37.422: INFO: namespace: e2e-tests-containers-4p4rl, resource: bindings, ignored listing per whitelist May 9 12:14:37.448: INFO: namespace e2e-tests-containers-4p4rl deletion completed in 6.07489501s • [SLOW TEST:12.692 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:14:37.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 9 12:14:44.373: INFO: 8 pods remaining May 9 12:14:44.373: INFO: 7 pods has nil DeletionTimestamp May 9 12:14:44.373: INFO: May 9 12:14:45.958: INFO: 0 pods remaining May 9 12:14:45.958: INFO: 0 pods has nil DeletionTimestamp May 9 12:14:45.958: INFO: May 9 12:14:46.781: INFO: 0 pods remaining May 9 12:14:46.781: INFO: 0 pods has nil DeletionTimestamp May 9 12:14:46.781: INFO: STEP: Gathering metrics W0509 12:14:47.844173 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 9 12:14:47.844: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:14:47.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8gmnc" for this suite. May 9 12:14:54.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:14:54.249: INFO: namespace: e2e-tests-gc-8gmnc, resource: bindings, ignored listing per whitelist May 9 12:14:54.271: INFO: namespace e2e-tests-gc-8gmnc deletion completed in 6.424320619s • [SLOW TEST:16.822 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:14:54.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:15:00.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-4g8gr" for this suite. May 9 12:15:06.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:15:06.637: INFO: namespace: e2e-tests-namespaces-4g8gr, resource: bindings, ignored listing per whitelist May 9 12:15:06.649: INFO: namespace e2e-tests-namespaces-4g8gr deletion completed in 6.085091428s STEP: Destroying namespace "e2e-tests-nsdeletetest-ns7xl" for this suite. May 9 12:15:06.651: INFO: Namespace e2e-tests-nsdeletetest-ns7xl was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-fm8tq" for this suite. May 9 12:15:12.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:15:12.741: INFO: namespace: e2e-tests-nsdeletetest-fm8tq, resource: bindings, ignored listing per whitelist May 9 12:15:12.797: INFO: namespace e2e-tests-nsdeletetest-fm8tq deletion completed in 6.146015138s • [SLOW TEST:18.526 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:15:12.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
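The DaemonSet named daemon-set that this test creates is roughly of the following shape, shown here as an illustrative sketch (labels and image are placeholders). It carries no toleration for the control-plane taint, which is why the per-node checks below skip hunter-control-plane:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image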
May 9 12:15:12.953: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:12.959: INFO: Number of nodes with available pods: 0 May 9 12:15:12.959: INFO: Node hunter-worker is running more than one daemon pod May 9 12:15:13.965: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:13.969: INFO: Number of nodes with available pods: 0 May 9 12:15:13.969: INFO: Node hunter-worker is running more than one daemon pod May 9 12:15:15.277: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:15.467: INFO: Number of nodes with available pods: 0 May 9 12:15:15.467: INFO: Node hunter-worker is running more than one daemon pod May 9 12:15:16.042: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:16.044: INFO: Number of nodes with available pods: 0 May 9 12:15:16.044: INFO: Node hunter-worker is running more than one daemon pod May 9 12:15:17.019: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:17.022: INFO: Number of nodes with available pods: 0 May 9 12:15:17.022: INFO: Node hunter-worker is running more than one daemon pod May 9 12:15:17.963: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:17.966: INFO: Number of nodes with available pods: 1 May 9 12:15:17.966: INFO: Node hunter-worker2 is running more than one daemon pod May 9 12:15:18.967: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:18.970: INFO: Number of nodes with available pods: 2 May 9 12:15:18.970: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 9 12:15:19.127: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 9 12:15:19.379: INFO: Number of nodes with available pods: 2 May 9 12:15:19.379: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lgl6l, will wait for the garbage collector to delete the pods May 9 12:15:20.630: INFO: Deleting DaemonSet.extensions daemon-set took: 5.231726ms May 9 12:15:20.730: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.193935ms May 9 12:15:31.341: INFO: Number of nodes with available pods: 0 May 9 12:15:31.341: INFO: Number of running nodes: 0, number of available pods: 0 May 9 12:15:31.344: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lgl6l/daemonsets","resourceVersion":"9590165"},"items":null} May 9 12:15:31.347: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lgl6l/pods","resourceVersion":"9590165"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:15:31.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-lgl6l" for this suite. May 9 12:15:37.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:15:37.404: INFO: namespace: e2e-tests-daemonsets-lgl6l, resource: bindings, ignored listing per whitelist May 9 12:15:37.436: INFO: namespace e2e-tests-daemonsets-lgl6l deletion completed in 6.075915062s • [SLOW TEST:24.639 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:15:37.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xmvsg May 9 12:15:43.554: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xmvsg STEP: checking the pod's current state and verifying that restartCount is present May 9 12:15:43.556: INFO: Initial restart count of pod liveness-http is 0 May 9 12:16:01.595: INFO: Restart count of pod e2e-tests-container-probe-xmvsg/liveness-http is now 1 (18.039070163s elapsed) May 9 12:16:21.811: INFO: Restart count of pod e2e-tests-container-probe-xmvsg/liveness-http is now 2 (38.255099244s elapsed) May 9 
12:16:41.849: INFO: Restart count of pod e2e-tests-container-probe-xmvsg/liveness-http is now 3 (58.293128805s elapsed) May 9 12:17:04.102: INFO: Restart count of pod e2e-tests-container-probe-xmvsg/liveness-http is now 4 (1m20.54670899s elapsed) May 9 12:18:08.766: INFO: Restart count of pod e2e-tests-container-probe-xmvsg/liveness-http is now 5 (2m25.210856299s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:18:09.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xmvsg" for this suite. May 9 12:18:15.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:18:15.520: INFO: namespace: e2e-tests-container-probe-xmvsg, resource: bindings, ignored listing per whitelist May 9 12:18:15.575: INFO: namespace e2e-tests-container-probe-xmvsg deletion completed in 6.327488267s • [SLOW TEST:158.138 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:18:15.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-2911b8b6-91ef-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 12:18:16.131: INFO: Waiting up to 5m0s for pod "pod-secrets-29192549-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-nbtdw" to be "success or failure" May 9 12:18:16.408: INFO: Pod "pod-secrets-29192549-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 276.451415ms May 9 12:18:18.605: INFO: Pod "pod-secrets-29192549-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474050957s May 9 12:18:20.780: INFO: Pod "pod-secrets-29192549-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649231582s May 9 12:18:22.828: INFO: Pod "pod-secrets-29192549-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697259035s May 9 12:18:24.954: INFO: Pod "pod-secrets-29192549-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.823102489s STEP: Saw pod success May 9 12:18:24.954: INFO: Pod "pod-secrets-29192549-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:18:24.958: INFO: Trying to get logs from node hunter-worker pod pod-secrets-29192549-91ef-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 12:18:25.094: INFO: Waiting for pod pod-secrets-29192549-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:18:25.337: INFO: Pod pod-secrets-29192549-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:18:25.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nbtdw" for this suite. May 9 12:18:33.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:18:33.542: INFO: namespace: e2e-tests-secrets-nbtdw, resource: bindings, ignored listing per whitelist May 9 12:18:33.564: INFO: namespace e2e-tests-secrets-nbtdw deletion completed in 8.223352573s • [SLOW TEST:17.988 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:18:33.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-zlgxk STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-zlgxk STEP: Deleting pre-stop pod May 9 12:18:56.112: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:18:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-zlgxk" for this suite. 
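The pre-stop behaviour verified above comes from a lifecycle.preStop handler on the tester pod, which reports back to the server pod when the tester is deleted; the "Received": {"prestop": 1} payload in the log is the server's record of that call. An illustrative sketch under those assumptions (the handler command, server address, and port are placeholders; the pod name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  restartPolicy: Never
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29   # placeholder image; the suite uses its own test images
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://10.244.1.20:8080/write?key=prestop&value=1"]   # placeholder server IP:port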
May 9 12:19:36.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:19:36.497: INFO: namespace: e2e-tests-prestop-zlgxk, resource: bindings, ignored listing per whitelist May 9 12:19:36.520: INFO: namespace e2e-tests-prestop-zlgxk deletion completed in 40.327966411s • [SLOW TEST:62.956 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:19:36.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 9 12:19:36.600: INFO: Waiting up to 5m0s for pod "client-containers-59139c73-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-containers-5l8rq" to be "success or failure" May 9 12:19:36.604: INFO: Pod "client-containers-59139c73-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371868ms May 9 12:19:38.608: INFO: Pod "client-containers-59139c73-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008146891s May 9 12:19:40.612: INFO: Pod "client-containers-59139c73-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011917597s STEP: Saw pod success May 9 12:19:40.612: INFO: Pod "client-containers-59139c73-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:19:40.615: INFO: Trying to get logs from node hunter-worker pod client-containers-59139c73-91ef-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 12:19:40.717: INFO: Waiting for pod client-containers-59139c73-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:19:40.759: INFO: Pod client-containers-59139c73-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:19:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-5l8rq" for this suite. 
May 9 12:19:46.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:19:46.792: INFO: namespace: e2e-tests-containers-5l8rq, resource: bindings, ignored listing per whitelist May 9 12:19:46.836: INFO: namespace e2e-tests-containers-5l8rq deletion completed in 6.072965011s • [SLOW TEST:10.316 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:19:46.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 9 12:19:46.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-f7dc8' May 9 12:19:49.471: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 9 12:19:49.471: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 9 12:19:51.481: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-79sk9] May 9 12:19:51.481: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-79sk9" in namespace "e2e-tests-kubectl-f7dc8" to be "running and ready" May 9 12:19:51.483: INFO: Pod "e2e-test-nginx-rc-79sk9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.892991ms May 9 12:19:53.487: INFO: Pod "e2e-test-nginx-rc-79sk9": Phase="Running", Reason="", readiness=true. Elapsed: 2.006035181s May 9 12:19:53.487: INFO: Pod "e2e-test-nginx-rc-79sk9" satisfied condition "running and ready" May 9 12:19:53.487: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-79sk9] May 9 12:19:53.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-f7dc8' May 9 12:19:53.601: INFO: stderr: "" May 9 12:19:53.601: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 9 12:19:53.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-f7dc8' May 9 12:19:53.708: INFO: stderr: "" May 9 12:19:53.708: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:19:53.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f7dc8" for this suite. May 9 12:20:15.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:20:15.801: INFO: namespace: e2e-tests-kubectl-f7dc8, resource: bindings, ignored listing per whitelist May 9 12:20:15.810: INFO: namespace e2e-tests-kubectl-f7dc8 deletion completed in 22.087523723s • [SLOW TEST:28.974 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:20:15.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 9 12:20:17.187: INFO: created pod pod-service-account-defaultsa May 9 12:20:17.187: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 9 12:20:17.802: INFO: created pod pod-service-account-mountsa May 9 12:20:17.802: INFO: pod pod-service-account-mountsa service account token volume mount: true May 9 12:20:17.894: INFO: created pod pod-service-account-nomountsa May 9 12:20:17.894: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 9 12:20:17.964: INFO: created pod pod-service-account-defaultsa-mountspec May 9 12:20:17.964: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 9 12:20:18.008: INFO: created pod pod-service-account-mountsa-mountspec May 9 12:20:18.008: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 9 12:20:18.038: INFO: created pod pod-service-account-nomountsa-mountspec May 9 12:20:18.038: INFO: pod 
pod-service-account-nomountsa-mountspec service account token volume mount: true May 9 12:20:18.107: INFO: created pod pod-service-account-defaultsa-nomountspec May 9 12:20:18.107: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 9 12:20:18.134: INFO: created pod pod-service-account-mountsa-nomountspec May 9 12:20:18.134: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 9 12:20:18.176: INFO: created pod pod-service-account-nomountsa-nomountspec May 9 12:20:18.176: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:20:18.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-8lctp" for this suite. May 9 12:20:52.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:20:52.329: INFO: namespace: e2e-tests-svcaccounts-8lctp, resource: bindings, ignored listing per whitelist May 9 12:20:52.341: INFO: namespace e2e-tests-svcaccounts-8lctp deletion completed in 34.140249953s • [SLOW TEST:36.530 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:20:52.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 9 12:20:52.443: INFO: Waiting up to 5m0s for pod "downward-api-86493e22-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-tmw2h" to be "success or failure" May 9 12:20:52.461: INFO: Pod "downward-api-86493e22-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.839836ms May 9 12:20:54.465: INFO: Pod "downward-api-86493e22-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021795797s May 9 12:20:56.471: INFO: Pod "downward-api-86493e22-91ef-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.027757435s May 9 12:20:58.476: INFO: Pod "downward-api-86493e22-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
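[Editor's note] The ServiceAccounts test above creates pods with every combination of service-account-level and pod-level token automount settings, and the "volume mount: true/false" lines show that the pod-level field wins when both are set. A minimal sketch of a pod that opts out is below; the pod name and busybox image are illustrative and not taken from the test.

```go
// Sketch: opting a Pod out of API token automount. The pod-level field
// overrides the ServiceAccount's automountServiceAccountToken setting.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	noMount := false
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-no-token-automount"}, // hypothetical name
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			// false means no service-account token volume is mounted
			// into the pod's containers, regardless of the SA's setting.
			AutomountServiceAccountToken: &noMount,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // illustrative image
				Args:  []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```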
Elapsed: 6.033079836s STEP: Saw pod success May 9 12:20:58.476: INFO: Pod "downward-api-86493e22-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:20:58.480: INFO: Trying to get logs from node hunter-worker pod downward-api-86493e22-91ef-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 12:20:58.512: INFO: Waiting for pod downward-api-86493e22-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:20:58.537: INFO: Pod downward-api-86493e22-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:20:58.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tmw2h" for this suite. May 9 12:21:04.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:21:04.678: INFO: namespace: e2e-tests-downward-api-tmw2h, resource: bindings, ignored listing per whitelist May 9 12:21:04.688: INFO: namespace e2e-tests-downward-api-tmw2h deletion completed in 6.147246972s • [SLOW TEST:12.347 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:21:04.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-k527m/secret-test-8db8dad9-91ef-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 12:21:04.962: INFO: Waiting up to 5m0s for pod "pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-k527m" to be "success or failure" May 9 12:21:04.972: INFO: Pod "pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016349ms May 9 12:21:07.040: INFO: Pod "pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078428787s May 9 12:21:09.045: INFO: Pod "pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.083106614s May 9 12:21:11.049: INFO: Pod "pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
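[Editor's note] The Downward API test above checks that limits.cpu and limits.memory env vars resolve to node allocatable values when the container declares no resource limits of its own. A minimal sketch of that env-var wiring follows; the pod name and busybox image are illustrative, not the test's.

```go
// Sketch: exposing a container's (defaulted) cpu/memory limits as env vars
// via the downward API's resourceFieldRef.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-limits-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox", // illustrative image
				Args:  []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							// With no resources.limits set on the container,
							// this resolves to the node's allocatable CPU.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.cpu",
							},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.memory",
							},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```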
Elapsed: 6.087442718s STEP: Saw pod success May 9 12:21:11.049: INFO: Pod "pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:21:11.052: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018 container env-test: STEP: delete the pod May 9 12:21:11.095: INFO: Waiting for pod pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:21:11.106: INFO: Pod pod-configmaps-8dbd5c4d-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:21:11.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-k527m" for this suite. May 9 12:21:19.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:21:19.187: INFO: namespace: e2e-tests-secrets-k527m, resource: bindings, ignored listing per whitelist May 9 12:21:19.280: INFO: namespace e2e-tests-secrets-k527m deletion completed in 8.170562864s • [SLOW TEST:14.592 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:21:19.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 12:21:19.454: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 9 12:21:24.458: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 9 12:21:24.458: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 9 12:21:26.463: INFO: Creating deployment "test-rollover-deployment" May 9 12:21:26.501: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 9 12:21:28.508: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 9 12:21:28.515: INFO: Ensure that both replica sets have 1 created replica May 9 12:21:28.520: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 9 12:21:28.526: INFO: Updating deployment test-rollover-deployment May 9 12:21:28.526: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 9 12:21:30.669: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 9 12:21:30.674: INFO: Make sure deployment "test-rollover-deployment" is complete May 9 12:21:30.678: INFO: all replica sets need to contain the 
pod-template-hash label May 9 12:21:30.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:21:32.684: INFO: all replica sets need to contain the pod-template-hash label May 9 12:21:32.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:21:34.685: INFO: all replica sets need to contain the pod-template-hash label May 9 12:21:34.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:21:36.686: INFO: all replica sets need to contain the pod-template-hash label May 9 12:21:36.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:21:38.687: INFO: all replica sets need to contain the pod-template-hash label May 9 12:21:38.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:21:40.687: INFO: all replica sets need to contain the pod-template-hash label May 9 12:21:40.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:21:42.687: INFO: all replica sets need to contain the pod-template-hash label May 9 12:21:42.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623693, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724623686, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:21:44.687: INFO: May 9 12:21:44.687: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 9 12:21:44.696: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-cjdx5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cjdx5/deployments/test-rollover-deployment,UID:9a914478-91ef-11ea-99e8-0242ac110002,ResourceVersion:9591286,Generation:2,CreationTimestamp:2020-05-09 12:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-09 12:21:26 +0000 UTC 2020-05-09 12:21:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-09 12:21:43 +0000 UTC 2020-05-09 12:21:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 9 12:21:44.699: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-cjdx5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cjdx5/replicasets/test-rollover-deployment-5b8479fdb6,UID:9bcc2671-91ef-11ea-99e8-0242ac110002,ResourceVersion:9591277,Generation:2,CreationTimestamp:2020-05-09 12:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9a914478-91ef-11ea-99e8-0242ac110002 0xc00190fd57 0xc00190fd58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 9 12:21:44.699: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 9 12:21:44.699: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-cjdx5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cjdx5/replicasets/test-rollover-controller,UID:96614bfb-91ef-11ea-99e8-0242ac110002,ResourceVersion:9591285,Generation:2,CreationTimestamp:2020-05-09 12:21:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9a914478-91ef-11ea-99e8-0242ac110002 0xc00190f617 
0xc00190f618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 9 12:21:44.699: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-cjdx5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cjdx5/replicasets/test-rollover-deployment-58494b7559,UID:9a9841d6-91ef-11ea-99e8-0242ac110002,ResourceVersion:9591237,Generation:2,CreationTimestamp:2020-05-09 12:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9a914478-91ef-11ea-99e8-0242ac110002 0xc00190f907 0xc00190f908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 9 12:21:44.701: INFO: Pod "test-rollover-deployment-5b8479fdb6-6c74p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-6c74p,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-cjdx5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cjdx5/pods/test-rollover-deployment-5b8479fdb6-6c74p,UID:9bd7b138-91ef-11ea-99e8-0242ac110002,ResourceVersion:9591255,Generation:0,CreationTimestamp:2020-05-09 12:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 9bcc2671-91ef-11ea-99e8-0242ac110002 0xc00208c127 0xc00208c128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bk8jv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bk8jv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bk8jv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00208c1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00208c1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:21:28 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:21:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:21:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:21:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.183,StartTime:2020-05-09 12:21:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-09 12:21:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://061f779b10c13c4ecf351b484a0f1ea264b949b7628c417af1829c0be4560e76}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:21:44.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-cjdx5" for this suite. May 9 12:21:50.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:21:50.900: INFO: namespace: e2e-tests-deployment-cjdx5, resource: bindings, ignored listing per whitelist May 9 12:21:50.998: INFO: namespace e2e-tests-deployment-cjdx5 deletion completed in 6.294459217s • [SLOW TEST:31.718 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:21:50.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 9 12:21:51.140: INFO: Waiting up to 5m0s for pod "pod-a9414f49-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-q526d" to be "success or failure" May 9 12:21:51.159: INFO: Pod "pod-a9414f49-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.492781ms May 9 12:21:53.163: INFO: Pod "pod-a9414f49-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02216472s May 9 12:21:55.167: INFO: Pod "pod-a9414f49-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026248992s STEP: Saw pod success May 9 12:21:55.167: INFO: Pod "pod-a9414f49-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:21:55.169: INFO: Trying to get logs from node hunter-worker2 pod pod-a9414f49-91ef-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 12:21:55.190: INFO: Waiting for pod pod-a9414f49-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:21:55.312: INFO: Pod pod-a9414f49-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:21:55.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-q526d" for this suite. May 9 12:22:03.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:22:03.413: INFO: namespace: e2e-tests-emptydir-q526d, resource: bindings, ignored listing per whitelist May 9 12:22:03.418: INFO: namespace e2e-tests-emptydir-q526d deletion completed in 8.102062476s • [SLOW TEST:12.420 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:22:03.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:22:03.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-gqdws" for this suite. 
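[Editor's note] The Deployment rollover test above (namespace e2e-tests-deployment-cjdx5) rolls pods over under a RollingUpdate strategy with maxUnavailable=0, maxSurge=1 and minReadySeconds=10, which is why the repeated status dumps show one updated replica surging alongside the old one until it has been Ready for ten seconds. A sketch of those strategy fields is below; the strategy values, container name and redis image are taken from the deployment dump in the log, the rest is kept minimal and illustrative.

```go
// Sketch: the rollout-controlling fields used by the rollover test, with the
// values visible in the logged Deployment dump (maxUnavailable=0, maxSurge=1,
// minReadySeconds=10).
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}

	d := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10, // a new pod must stay Ready this long before it counts as available
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable, // never drop below the desired replica count
					MaxSurge:       &maxSurge,       // allow one extra pod during the rollover
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
```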
May 9 12:22:09.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:22:09.669: INFO: namespace: e2e-tests-kubelet-test-gqdws, resource: bindings, ignored listing per whitelist May 9 12:22:09.675: INFO: namespace e2e-tests-kubelet-test-gqdws deletion completed in 6.09539916s • [SLOW TEST:6.257 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:22:09.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 9 12:22:09.803: INFO: Waiting up to 5m0s for pod "client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-containers-25lp9" to be "success or failure" May 9 12:22:09.808: INFO: Pod "client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.243942ms May 9 12:22:11.899: INFO: Pod "client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096192415s May 9 12:22:13.904: INFO: Pod "client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100775202s STEP: Saw pod success May 9 12:22:13.904: INFO: Pod "client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:22:13.907: INFO: Trying to get logs from node hunter-worker2 pod client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 12:22:14.097: INFO: Waiting for pod client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:22:14.174: INFO: Pod client-containers-b4615e1d-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:22:14.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-25lp9" for this suite. 
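[Editor's note] The Docker Containers test above leaves both command and args blank, so the container runs the image's own ENTRYPOINT and CMD. A short sketch of that case, with the full command/args interaction noted in comments, follows; the nginx image is borrowed from the kubectl test earlier in this run and the pod name is illustrative.

```go
// Sketch: relying on the image defaults. The comments summarize how
// Command/Args interact with the image's ENTRYPOINT/CMD.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Command nil, Args nil  -> image ENTRYPOINT with image CMD (this example)
	// Command nil, Args set  -> image ENTRYPOINT with the given Args
	// Command set, Args nil  -> given Command only, image CMD ignored
	// Command set, Args set  -> given Command with the given Args
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "image-defaults-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/nginx:1.14-alpine", // image seen earlier in this log
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```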
May 9 12:22:20.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:22:20.240: INFO: namespace: e2e-tests-containers-25lp9, resource: bindings, ignored listing per whitelist May 9 12:22:20.257: INFO: namespace e2e-tests-containers-25lp9 deletion completed in 6.079102033s • [SLOW TEST:10.582 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:22:20.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-bdzm STEP: Creating a pod to test atomic-volume-subpath May 9 12:22:20.415: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bdzm" in namespace "e2e-tests-subpath-5jsdc" to be "success or failure" May 9 12:22:20.419: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09911ms May 9 12:22:22.423: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007387009s May 9 12:22:24.426: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010922554s May 9 12:22:26.431: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01556931s May 9 12:22:28.436: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=true. Elapsed: 8.020500986s May 9 12:22:30.448: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 10.033222296s May 9 12:22:32.452: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 12.036548787s May 9 12:22:34.456: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 14.04034389s May 9 12:22:36.460: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 16.044518192s May 9 12:22:38.465: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 18.049322622s May 9 12:22:40.479: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 20.063491616s May 9 12:22:42.484: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 22.068364035s May 9 12:22:44.488: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.072539765s May 9 12:22:46.492: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Running", Reason="", readiness=false. Elapsed: 26.076971324s May 9 12:22:48.496: INFO: Pod "pod-subpath-test-configmap-bdzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.080795369s STEP: Saw pod success May 9 12:22:48.496: INFO: Pod "pod-subpath-test-configmap-bdzm" satisfied condition "success or failure" May 9 12:22:48.499: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-bdzm container test-container-subpath-configmap-bdzm: STEP: delete the pod May 9 12:22:48.544: INFO: Waiting for pod pod-subpath-test-configmap-bdzm to disappear May 9 12:22:48.569: INFO: Pod pod-subpath-test-configmap-bdzm no longer exists STEP: Deleting pod pod-subpath-test-configmap-bdzm May 9 12:22:48.569: INFO: Deleting pod "pod-subpath-test-configmap-bdzm" in namespace "e2e-tests-subpath-5jsdc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:22:48.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5jsdc" for this suite. May 9 12:22:54.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:22:54.598: INFO: namespace: e2e-tests-subpath-5jsdc, resource: bindings, ignored listing per whitelist May 9 12:22:54.676: INFO: namespace e2e-tests-subpath-5jsdc deletion completed in 6.102388935s • [SLOW TEST:34.419 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:22:54.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 9 12:22:54.936: INFO: Waiting up to 5m0s for pod "pod-cf3e65aa-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-44n4x" to be "success or failure" May 9 12:22:54.948: INFO: Pod "pod-cf3e65aa-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.477592ms May 9 12:22:56.950: INFO: Pod "pod-cf3e65aa-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014031291s May 9 12:22:58.954: INFO: Pod "pod-cf3e65aa-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017930666s STEP: Saw pod success May 9 12:22:58.954: INFO: Pod "pod-cf3e65aa-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:22:58.956: INFO: Trying to get logs from node hunter-worker pod pod-cf3e65aa-91ef-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 12:22:59.140: INFO: Waiting for pod pod-cf3e65aa-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:22:59.335: INFO: Pod pod-cf3e65aa-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:22:59.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-44n4x" for this suite. May 9 12:23:05.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:23:05.394: INFO: namespace: e2e-tests-emptydir-44n4x, resource: bindings, ignored listing per whitelist May 9 12:23:05.442: INFO: namespace e2e-tests-emptydir-44n4x deletion completed in 6.102949516s • [SLOW TEST:10.766 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:23:05.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-d59e2e27-91ef-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume secrets May 9 12:23:05.564: INFO: Waiting up to 5m0s for pod "pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018" in namespace "e2e-tests-secrets-gkjmz" to be "success or failure" May 9 12:23:05.587: INFO: Pod "pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.403874ms May 9 12:23:07.683: INFO: Pod "pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118157035s May 9 12:23:09.687: INFO: Pod "pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.122586092s STEP: Saw pod success May 9 12:23:09.687: INFO: Pod "pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:23:09.690: INFO: Trying to get logs from node hunter-worker pod pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018 container secret-volume-test: STEP: delete the pod May 9 12:23:09.724: INFO: Waiting for pod pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018 to disappear May 9 12:23:09.736: INFO: Pod pod-secrets-d59fb779-91ef-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:23:09.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gkjmz" for this suite. May 9 12:23:15.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:23:15.824: INFO: namespace: e2e-tests-secrets-gkjmz, resource: bindings, ignored listing per whitelist May 9 12:23:15.824: INFO: namespace e2e-tests-secrets-gkjmz deletion completed in 6.085284029s • [SLOW TEST:10.382 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:23:15.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-f4fzn May 9 12:23:19.986: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-f4fzn STEP: checking the pod's current state and verifying that restartCount is present May 9 12:23:19.989: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:27:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-f4fzn" for this suite. 
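[Editor's note] The Probing container test above watches pod liveness-exec for roughly four minutes to confirm that an exec liveness probe running `cat /tmp/health` never increments restartCount. A minimal sketch of a pod wired that way is below; the busybox image, container command and probe timings are illustrative assumptions, since the log does not show the test's actual container spec.

```go
// Sketch: an exec liveness probe ("cat /tmp/health") that keeps passing,
// so the kubelet never restarts the container and restartCount stays 0.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       5,
		FailureThreshold:    1,
	}
	// Assign through the promoted field of the embedded handler struct,
	// which works across client API versions.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox", // illustrative image
				// Create the health file once and keep it; "cat /tmp/health"
				// then succeeds on every probe run.
				Args:          []string{"sh", "-c", "touch /tmp/health && sleep 600"},
				LivenessProbe: probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```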
May 9 12:27:27.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:27:27.205: INFO: namespace: e2e-tests-container-probe-f4fzn, resource: bindings, ignored listing per whitelist May 9 12:27:27.209: INFO: namespace e2e-tests-container-probe-f4fzn deletion completed in 6.10221601s • [SLOW TEST:251.385 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:27:27.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 12:27:27.303: INFO: Creating deployment "test-recreate-deployment" May 9 12:27:27.316: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 9 12:27:27.325: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 9 12:27:29.334: INFO: Waiting deployment "test-recreate-deployment" to complete May 9 12:27:29.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724624047, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724624047, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724624047, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724624047, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 9 12:27:31.340: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 9 12:27:31.346: INFO: Updating deployment test-recreate-deployment May 9 12:27:31.347: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 9 12:27:31.598: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-b6tqp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b6tqp/deployments/test-recreate-deployment,UID:71a50e8b-91f0-11ea-99e8-0242ac110002,ResourceVersion:9592194,Generation:2,CreationTimestamp:2020-05-09 12:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-09 12:27:31 +0000 UTC 2020-05-09 12:27:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-09 12:27:31 +0000 UTC 2020-05-09 12:27:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 9 12:27:31.620: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-b6tqp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b6tqp/replicasets/test-recreate-deployment-589c4bfd,UID:742083c7-91f0-11ea-99e8-0242ac110002,ResourceVersion:9592192,Generation:1,CreationTimestamp:2020-05-09 12:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 71a50e8b-91f0-11ea-99e8-0242ac110002 0xc000cc5aaf 0xc000dbc010}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 9 12:27:31.620: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 9 12:27:31.620: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-b6tqp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b6tqp/replicasets/test-recreate-deployment-5bf7f65dc,UID:71a82f9b-91f0-11ea-99e8-0242ac110002,ResourceVersion:9592182,Generation:2,CreationTimestamp:2020-05-09 12:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 71a50e8b-91f0-11ea-99e8-0242ac110002 0xc000dbc1b0 0xc000dbc1b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 9 12:27:31.624: INFO: Pod "test-recreate-deployment-589c4bfd-8zpqv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-8zpqv,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-b6tqp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b6tqp/pods/test-recreate-deployment-589c4bfd-8zpqv,UID:74213f28-91f0-11ea-99e8-0242ac110002,ResourceVersion:9592195,Generation:0,CreationTimestamp:2020-05-09 12:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 742083c7-91f0-11ea-99e8-0242ac110002 0xc001595e7f 0xc001595e90}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z77wm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z77wm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z77wm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001595f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001595f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:27:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-09 12:27:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-09 12:27:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:27:31.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-b6tqp" for this suite. 
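The Recreate behaviour being verified above comes entirely from the deployment's strategy field. A minimal Go sketch of an equivalent object, reusing the names and images that appear in the dump above (the int32Ptr helper is local to this sketch, and anything not shown in the dump is omitted):

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // With Strategy.Type = Recreate, a rollout first scales the old ReplicaSet
    // to zero and only then creates pods for the new template, which is what
    // the test asserts (old redis pods gone before new nginx pods run).
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(out))
}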
May 9 12:27:37.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:27:37.940: INFO: namespace: e2e-tests-deployment-b6tqp, resource: bindings, ignored listing per whitelist May 9 12:27:37.972: INFO: namespace e2e-tests-deployment-b6tqp deletion completed in 6.344540631s • [SLOW TEST:10.763 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:27:37.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-npsl6 STEP: creating a selector STEP: Creating the service pods in kubernetes May 9 12:27:38.049: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 9 12:28:02.224: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.189 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-npsl6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 12:28:02.224: INFO: >>> kubeConfig: /root/.kube/config I0509 12:28:02.263665 6 log.go:172] (0xc000db1ad0) (0xc001c57720) Create stream I0509 12:28:02.263718 6 log.go:172] (0xc000db1ad0) (0xc001c57720) Stream added, broadcasting: 1 I0509 12:28:02.267392 6 log.go:172] (0xc000db1ad0) Reply frame received for 1 I0509 12:28:02.267438 6 log.go:172] (0xc000db1ad0) (0xc002a2cf00) Create stream I0509 12:28:02.267456 6 log.go:172] (0xc000db1ad0) (0xc002a2cf00) Stream added, broadcasting: 3 I0509 12:28:02.269014 6 log.go:172] (0xc000db1ad0) Reply frame received for 3 I0509 12:28:02.269047 6 log.go:172] (0xc000db1ad0) (0xc0021774a0) Create stream I0509 12:28:02.269058 6 log.go:172] (0xc000db1ad0) (0xc0021774a0) Stream added, broadcasting: 5 I0509 12:28:02.270193 6 log.go:172] (0xc000db1ad0) Reply frame received for 5 I0509 12:28:03.335585 6 log.go:172] (0xc000db1ad0) Data frame received for 3 I0509 12:28:03.335634 6 log.go:172] (0xc002a2cf00) (3) Data frame handling I0509 12:28:03.335671 6 log.go:172] (0xc002a2cf00) (3) Data frame sent I0509 12:28:03.335697 6 log.go:172] (0xc000db1ad0) Data frame received for 3 I0509 12:28:03.335719 6 log.go:172] (0xc002a2cf00) (3) Data frame handling I0509 12:28:03.335856 6 log.go:172] (0xc000db1ad0) Data frame received for 5 I0509 12:28:03.335888 6 log.go:172] (0xc0021774a0) (5) Data frame handling I0509 12:28:03.338128 6 log.go:172] (0xc000db1ad0) Data frame received for 1 I0509 12:28:03.338216 6 
log.go:172] (0xc001c57720) (1) Data frame handling I0509 12:28:03.338299 6 log.go:172] (0xc001c57720) (1) Data frame sent I0509 12:28:03.338327 6 log.go:172] (0xc000db1ad0) (0xc001c57720) Stream removed, broadcasting: 1 I0509 12:28:03.338358 6 log.go:172] (0xc000db1ad0) Go away received I0509 12:28:03.338508 6 log.go:172] (0xc000db1ad0) (0xc001c57720) Stream removed, broadcasting: 1 I0509 12:28:03.338564 6 log.go:172] (0xc000db1ad0) (0xc002a2cf00) Stream removed, broadcasting: 3 I0509 12:28:03.338589 6 log.go:172] (0xc000db1ad0) (0xc0021774a0) Stream removed, broadcasting: 5 May 9 12:28:03.338: INFO: Found all expected endpoints: [netserver-0] May 9 12:28:03.342: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.25 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-npsl6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 9 12:28:03.342: INFO: >>> kubeConfig: /root/.kube/config I0509 12:28:03.379073 6 log.go:172] (0xc0000ebe40) (0xc002177680) Create stream I0509 12:28:03.379114 6 log.go:172] (0xc0000ebe40) (0xc002177680) Stream added, broadcasting: 1 I0509 12:28:03.381383 6 log.go:172] (0xc0000ebe40) Reply frame received for 1 I0509 12:28:03.381424 6 log.go:172] (0xc0000ebe40) (0xc001c57900) Create stream I0509 12:28:03.381468 6 log.go:172] (0xc0000ebe40) (0xc001c57900) Stream added, broadcasting: 3 I0509 12:28:03.382477 6 log.go:172] (0xc0000ebe40) Reply frame received for 3 I0509 12:28:03.382518 6 log.go:172] (0xc0000ebe40) (0xc002834500) Create stream I0509 12:28:03.382542 6 log.go:172] (0xc0000ebe40) (0xc002834500) Stream added, broadcasting: 5 I0509 12:28:03.383526 6 log.go:172] (0xc0000ebe40) Reply frame received for 5 I0509 12:28:04.472932 6 log.go:172] (0xc0000ebe40) Data frame received for 3 I0509 12:28:04.472981 6 log.go:172] (0xc001c57900) (3) Data frame handling I0509 12:28:04.473007 6 log.go:172] (0xc001c57900) (3) Data frame sent I0509 12:28:04.473047 6 log.go:172] (0xc0000ebe40) Data frame received for 5 I0509 12:28:04.473311 6 log.go:172] (0xc002834500) (5) Data frame handling I0509 12:28:04.473371 6 log.go:172] (0xc0000ebe40) Data frame received for 3 I0509 12:28:04.473400 6 log.go:172] (0xc001c57900) (3) Data frame handling I0509 12:28:04.475619 6 log.go:172] (0xc0000ebe40) Data frame received for 1 I0509 12:28:04.475660 6 log.go:172] (0xc002177680) (1) Data frame handling I0509 12:28:04.475685 6 log.go:172] (0xc002177680) (1) Data frame sent I0509 12:28:04.475708 6 log.go:172] (0xc0000ebe40) (0xc002177680) Stream removed, broadcasting: 1 I0509 12:28:04.475841 6 log.go:172] (0xc0000ebe40) (0xc002177680) Stream removed, broadcasting: 1 I0509 12:28:04.475865 6 log.go:172] (0xc0000ebe40) (0xc001c57900) Stream removed, broadcasting: 3 I0509 12:28:04.475901 6 log.go:172] (0xc0000ebe40) (0xc002834500) Stream removed, broadcasting: 5 I0509 12:28:04.475935 6 log.go:172] (0xc0000ebe40) Go away received May 9 12:28:04.475: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:28:04.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-npsl6" for this suite. 
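The check driven through ExecWithOptions above is a one-shot UDP exchange: send the literal string hostName to the netserver pod on port 8081 and read back the hostname it reports. A stdlib-only Go sketch of the same exchange; the address is the pod IP from this particular run and is only a placeholder:

package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

func main() {
    // Equivalent of: echo 'hostName' | nc -w 1 -u 10.244.1.189 8081
    addr := "10.244.1.189:8081" // netserver pod IP:port from the log; cluster-specific
    conn, err := net.DialTimeout("udp", addr, time.Second)
    if err != nil {
        fmt.Fprintln(os.Stderr, "dial:", err)
        os.Exit(1)
    }
    defer conn.Close()

    if _, err := conn.Write([]byte("hostName\n")); err != nil {
        fmt.Fprintln(os.Stderr, "write:", err)
        os.Exit(1)
    }
    conn.SetReadDeadline(time.Now().Add(time.Second))
    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        fmt.Fprintln(os.Stderr, "read:", err)
        os.Exit(1)
    }
    // The test collects these replies and matches them against the expected
    // endpoint names (netserver-0, netserver-1).
    fmt.Println(string(buf[:n]))
}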
May 9 12:28:16.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:28:16.563: INFO: namespace: e2e-tests-pod-network-test-npsl6, resource: bindings, ignored listing per whitelist May 9 12:28:16.570: INFO: namespace e2e-tests-pod-network-test-npsl6 deletion completed in 12.089378237s • [SLOW TEST:38.596 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:28:16.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 9 12:28:16.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-ggg4h run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 9 12:28:19.548: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0509 12:28:19.476336 3036 log.go:172] (0xc0009100b0) (0xc00072c140) Create stream\nI0509 12:28:19.476410 3036 log.go:172] (0xc0009100b0) (0xc00072c140) Stream added, broadcasting: 1\nI0509 12:28:19.479072 3036 log.go:172] (0xc0009100b0) Reply frame received for 1\nI0509 12:28:19.479126 3036 log.go:172] (0xc0009100b0) (0xc00072c1e0) Create stream\nI0509 12:28:19.479141 3036 log.go:172] (0xc0009100b0) (0xc00072c1e0) Stream added, broadcasting: 3\nI0509 12:28:19.480106 3036 log.go:172] (0xc0009100b0) Reply frame received for 3\nI0509 12:28:19.480155 3036 log.go:172] (0xc0009100b0) (0xc00072c280) Create stream\nI0509 12:28:19.480169 3036 log.go:172] (0xc0009100b0) (0xc00072c280) Stream added, broadcasting: 5\nI0509 12:28:19.490184 3036 log.go:172] (0xc0009100b0) Reply frame received for 5\nI0509 12:28:19.490228 3036 log.go:172] (0xc0009100b0) (0xc000878000) Create stream\nI0509 12:28:19.490237 3036 log.go:172] (0xc0009100b0) (0xc000878000) Stream added, broadcasting: 7\nI0509 12:28:19.491071 3036 log.go:172] (0xc0009100b0) Reply frame received for 7\nI0509 12:28:19.491262 3036 log.go:172] (0xc00072c1e0) (3) Writing data frame\nI0509 12:28:19.491371 3036 log.go:172] (0xc00072c1e0) (3) Writing data frame\nI0509 12:28:19.492104 3036 log.go:172] (0xc0009100b0) Data frame received for 5\nI0509 12:28:19.492120 3036 log.go:172] (0xc00072c280) (5) Data frame handling\nI0509 12:28:19.492133 3036 log.go:172] (0xc00072c280) (5) Data frame sent\nI0509 12:28:19.492736 3036 log.go:172] (0xc0009100b0) Data frame received for 5\nI0509 12:28:19.492749 3036 log.go:172] (0xc00072c280) (5) Data frame handling\nI0509 12:28:19.492758 3036 log.go:172] (0xc00072c280) (5) Data frame sent\nI0509 12:28:19.523657 3036 log.go:172] (0xc0009100b0) Data frame received for 7\nI0509 12:28:19.523697 3036 log.go:172] (0xc0009100b0) Data frame received for 5\nI0509 12:28:19.523724 3036 log.go:172] (0xc00072c280) (5) Data frame handling\nI0509 12:28:19.523758 3036 log.go:172] (0xc000878000) (7) Data frame handling\nI0509 12:28:19.524217 3036 log.go:172] (0xc0009100b0) Data frame received for 1\nI0509 12:28:19.524244 3036 log.go:172] (0xc00072c140) (1) Data frame handling\nI0509 12:28:19.524258 3036 log.go:172] (0xc00072c140) (1) Data frame sent\nI0509 12:28:19.524409 3036 log.go:172] (0xc0009100b0) (0xc00072c1e0) Stream removed, broadcasting: 3\nI0509 12:28:19.524454 3036 log.go:172] (0xc0009100b0) (0xc00072c140) Stream removed, broadcasting: 1\nI0509 12:28:19.524514 3036 log.go:172] (0xc0009100b0) Go away received\nI0509 12:28:19.524563 3036 log.go:172] (0xc0009100b0) (0xc00072c140) Stream removed, broadcasting: 1\nI0509 12:28:19.524595 3036 log.go:172] (0xc0009100b0) (0xc00072c1e0) Stream removed, broadcasting: 3\nI0509 12:28:19.524606 3036 log.go:172] (0xc0009100b0) (0xc00072c280) Stream removed, broadcasting: 5\nI0509 12:28:19.524616 3036 log.go:172] (0xc0009100b0) (0xc000878000) Stream removed, broadcasting: 7\n" May 9 12:28:19.548: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:28:21.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ggg4h" for this suite. 
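As the deprecation warning captured in stderr suggests, the same job can also be created directly instead of via kubectl run --generator=job/v1. A rough Go sketch of the object that command produces, assuming only the fields visible in the log matter here:

package main

import (
    "encoding/json"
    "fmt"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    RestartPolicy: corev1.RestartPolicyOnFailure,
                    Containers: []corev1.Container{{
                        Name:    "e2e-test-rm-busybox-job",
                        Image:   "docker.io/library/busybox:1.29",
                        Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
                        // Stdin keeps the attach session open so the piped
                        // "abcd1234" input seen in the stdout above reaches cat.
                        Stdin: true,
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(job, "", "  ")
    fmt.Println(string(out))
}

The --rm flag in the logged command then corresponds to deleting the job once the attached session ends, which is exactly what the verification step checks.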
May 9 12:28:33.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:28:33.681: INFO: namespace: e2e-tests-kubectl-ggg4h, resource: bindings, ignored listing per whitelist May 9 12:28:33.688: INFO: namespace e2e-tests-kubectl-ggg4h deletion completed in 12.122367991s • [SLOW TEST:17.118 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:28:33.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 9 12:28:33.792: INFO: Waiting up to 5m0s for pod "pod-994443ea-91f0-11ea-a20c-0242ac110018" in namespace "e2e-tests-emptydir-kndxx" to be "success or failure" May 9 12:28:33.797: INFO: Pod "pod-994443ea-91f0-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478436ms May 9 12:28:35.801: INFO: Pod "pod-994443ea-91f0-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008481315s May 9 12:28:37.805: INFO: Pod "pod-994443ea-91f0-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012106254s STEP: Saw pod success May 9 12:28:37.805: INFO: Pod "pod-994443ea-91f0-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:28:37.808: INFO: Trying to get logs from node hunter-worker2 pod pod-994443ea-91f0-11ea-a20c-0242ac110018 container test-container: STEP: delete the pod May 9 12:28:37.871: INFO: Waiting for pod pod-994443ea-91f0-11ea-a20c-0242ac110018 to disappear May 9 12:28:37.950: INFO: Pod pod-994443ea-91f0-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:28:37.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kndxx" for this suite. 
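The emptyDir permutation above boils down to three spec choices: run as a non-root UID, leave the emptyDir medium at its default (node disk rather than tmpfs), and create a file with mode 0644 inside the mount. A simplified Go sketch of such a pod; the busybox command and UID stand in for the conformance suite's mounttest image and its flags:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-non-root"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)}, // illustrative non-root UID
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // An empty EmptyDirVolumeSource selects the default medium (node disk).
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c",
                    "echo hello > /mnt/test/file && chmod 0644 /mnt/test/file && ls -ln /mnt/test/file"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}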
May 9 12:28:43.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:28:44.045: INFO: namespace: e2e-tests-emptydir-kndxx, resource: bindings, ignored listing per whitelist May 9 12:28:44.076: INFO: namespace e2e-tests-emptydir-kndxx deletion completed in 6.121838059s • [SLOW TEST:10.388 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:28:44.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 12:28:44.280: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9f77c92f-91f0-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0017d5ef2), BlockOwnerDeletion:(*bool)(0xc0017d5ef3)}} May 9 12:28:44.306: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9f7686f6-91f0-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0018e633a), BlockOwnerDeletion:(*bool)(0xc0018e633b)}} May 9 12:28:44.322: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9f770d3d-91f0-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00132435a), BlockOwnerDeletion:(*bool)(0xc00132435b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:28:49.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2wv5v" for this suite. 
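The dependency circle in the garbage-collector test is built purely out of ownerReferences: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, as the three log lines above show. A small Go sketch of that wiring; in practice the UIDs only exist after the pods are created, so the references would be applied with an Update or Patch call against the API server (omitted here), and the helper below is purely illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedBy returns an owner-reference list pointing at owner, with the same
// Controller/BlockOwnerDeletion settings seen in the log above.
func ownedBy(owner *corev1.Pod) []metav1.OwnerReference {
    t := true
    return []metav1.OwnerReference{{
        APIVersion:         "v1",
        Kind:               "Pod",
        Name:               owner.Name,
        UID:                owner.UID, // filled in by the API server at creation time
        Controller:         &t,
        BlockOwnerDeletion: &t,
    }}
}

func main() {
    pod1 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod1"}}
    pod2 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod2"}}
    pod3 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod3"}}

    pod1.OwnerReferences = ownedBy(pod3)
    pod2.OwnerReferences = ownedBy(pod1)
    pod3.OwnerReferences = ownedBy(pod2)

    for _, p := range []*corev1.Pod{pod1, pod2, pod3} {
        fmt.Printf("%s is owned by %s\n", p.Name, p.OwnerReferences[0].Name)
    }
}

The test's assertion is that the garbage collector tolerates such a cycle rather than blocking deletion on it.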
May 9 12:28:55.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:28:55.460: INFO: namespace: e2e-tests-gc-2wv5v, resource: bindings, ignored listing per whitelist May 9 12:28:55.466: INFO: namespace e2e-tests-gc-2wv5v deletion completed in 6.092631787s • [SLOW TEST:11.389 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:28:55.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-a63ef0a0-91f0-11ea-a20c-0242ac110018 STEP: Creating a pod to test consume configMaps May 9 12:28:55.584: INFO: Waiting up to 5m0s for pod "pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018" in namespace "e2e-tests-configmap-7gstl" to be "success or failure" May 9 12:28:55.615: INFO: Pod "pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.87709ms May 9 12:28:57.620: INFO: Pod "pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035488059s May 9 12:28:59.624: INFO: Pod "pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040206028s STEP: Saw pod success May 9 12:28:59.624: INFO: Pod "pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:28:59.628: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018 container configmap-volume-test: STEP: delete the pod May 9 12:28:59.663: INFO: Waiting for pod pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018 to disappear May 9 12:28:59.693: INFO: Pod pod-configmaps-a640786c-91f0-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:28:59.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7gstl" for this suite. 
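The "mappings and Item mode set" variant above relies on the items list of a configMap volume source, which remaps a key to a chosen path and file mode instead of projecting every key under its own name. A Go sketch of such a volume; the key, path, and 0400 mode are illustrative values, not copied from the test source:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400)
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                // Only the listed keys are projected, at the given path and mode.
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",
                    Path: "path/to/data-2",
                    Mode: &mode,
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}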
May 9 12:29:05.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:29:05.778: INFO: namespace: e2e-tests-configmap-7gstl, resource: bindings, ignored listing per whitelist May 9 12:29:05.781: INFO: namespace e2e-tests-configmap-7gstl deletion completed in 6.084286978s • [SLOW TEST:10.315 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:29:05.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 9 12:29:05.887: INFO: Waiting up to 5m0s for pod "downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018" in namespace "e2e-tests-downward-api-crf9w" to be "success or failure" May 9 12:29:05.926: INFO: Pod "downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 39.325904ms May 9 12:29:07.931: INFO: Pod "downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043664169s May 9 12:29:09.935: INFO: Pod "downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047644424s STEP: Saw pod success May 9 12:29:09.935: INFO: Pod "downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018" satisfied condition "success or failure" May 9 12:29:09.938: INFO: Trying to get logs from node hunter-worker2 pod downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018 container dapi-container: STEP: delete the pod May 9 12:29:09.955: INFO: Waiting for pod downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018 to disappear May 9 12:29:09.959: INFO: Pod downward-api-ac645dd5-91f0-11ea-a20c-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:29:09.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-crf9w" for this suite. 
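The downward-API test above injects pod metadata as environment variables via fieldRef selectors. A Go sketch of the relevant container fragment; the variable names are illustrative, and only the field paths are fixed by the feature itself:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    fieldEnv := func(name, path string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
            },
        }
    }
    c := corev1.Container{
        Name:    "dapi-container",
        Image:   "docker.io/library/busybox:1.29",
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{
            fieldEnv("POD_NAME", "metadata.name"),
            fieldEnv("POD_NAMESPACE", "metadata.namespace"),
            fieldEnv("POD_IP", "status.podIP"),
        },
    }
    out, _ := json.MarshalIndent(c, "", "  ")
    fmt.Println(string(out))
}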
May 9 12:29:15.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:29:16.052: INFO: namespace: e2e-tests-downward-api-crf9w, resource: bindings, ignored listing per whitelist May 9 12:29:16.059: INFO: namespace e2e-tests-downward-api-crf9w deletion completed in 6.097240312s • [SLOW TEST:10.278 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:29:16.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 9 12:29:16.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 9 12:29:16.224: INFO: stderr: "" May 9 12:29:16.224: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 9 12:29:16.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dkc7f' May 9 12:29:16.493: INFO: stderr: "" May 9 12:29:16.493: INFO: stdout: "replicationcontroller/redis-master created\n" May 9 12:29:16.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dkc7f' May 9 12:29:16.862: INFO: stderr: "" May 9 12:29:16.862: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 9 12:29:17.866: INFO: Selector matched 1 pods for map[app:redis] May 9 12:29:17.866: INFO: Found 0 / 1 May 9 12:29:18.867: INFO: Selector matched 1 pods for map[app:redis] May 9 12:29:18.867: INFO: Found 0 / 1 May 9 12:29:19.866: INFO: Selector matched 1 pods for map[app:redis] May 9 12:29:19.866: INFO: Found 1 / 1 May 9 12:29:19.866: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 9 12:29:19.868: INFO: Selector matched 1 pods for map[app:redis] May 9 12:29:19.868: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 9 12:29:19.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-5fhgt --namespace=e2e-tests-kubectl-dkc7f' May 9 12:29:19.998: INFO: stderr: "" May 9 12:29:19.998: INFO: stdout: "Name: redis-master-5fhgt\nNamespace: e2e-tests-kubectl-dkc7f\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Sat, 09 May 2020 12:29:16 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.192\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://85939a34e67535e5d414a459828ee10a67ca5201bccba1a79bb380df04bb152d\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 09 May 2020 12:29:19 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-n5gs9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-n5gs9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-n5gs9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned e2e-tests-kubectl-dkc7f/redis-master-5fhgt to hunter-worker\n Normal Pulled 2s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 0s kubelet, hunter-worker Started container\n" May 9 12:29:19.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-dkc7f' May 9 12:29:20.127: INFO: stderr: "" May 9 12:29:20.127: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-dkc7f\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-5fhgt\n" May 9 12:29:20.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-dkc7f' May 9 12:29:20.267: INFO: stderr: "" May 9 12:29:20.267: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-dkc7f\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.104.169.70\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.192:6379\nSession Affinity: None\nEvents: \n" May 9 12:29:20.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 9 12:29:20.405: INFO: stderr: "" May 9 12:29:20.405: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 09 May 2020 12:29:12 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 09 May 2020 12:29:12 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 09 May 2020 12:29:12 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 09 May 2020 12:29:12 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 54d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 54d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 9 12:29:20.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-dkc7f' May 9 12:29:20.527: INFO: stderr: "" May 9 12:29:20.527: INFO: stdout: "Name: e2e-tests-kubectl-dkc7f\nLabels: e2e-framework=kubectl\n e2e-run=60ac9bb2-91e2-11ea-a20c-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:29:20.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dkc7f" for this suite. 
May 9 12:29:42.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:29:42.622: INFO: namespace: e2e-tests-kubectl-dkc7f, resource: bindings, ignored listing per whitelist May 9 12:29:42.638: INFO: namespace e2e-tests-kubectl-dkc7f deletion completed in 22.107216646s • [SLOW TEST:26.579 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:29:42.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:29:46.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-gdwnn" for this suite. 
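The kubelet test above schedules a container whose command always exits non-zero and then checks that the container status carries a terminated state with a reason. A minimal Go sketch of such a pod, assuming /bin/false as the always-failing command (the exact image and command are not shown in this log):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "bin-false",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"/bin/false"},
            }},
        },
    }
    // Once the kubelet has run the container, the expectation is that
    // pod.Status.ContainerStatuses[0].State.Terminated is set, with a
    // non-zero ExitCode and a non-empty Reason.
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}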
May 9 12:29:52.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:29:52.838: INFO: namespace: e2e-tests-kubelet-test-gdwnn, resource: bindings, ignored listing per whitelist May 9 12:29:52.878: INFO: namespace e2e-tests-kubelet-test-gdwnn deletion completed in 6.088868765s • [SLOW TEST:10.240 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 9 12:29:52.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 9 12:29:57.502: INFO: Successfully updated pod "labelsupdatec875025e-91f0-11ea-a20c-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 9 12:29:59.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kzzzr" for this suite. May 9 12:30:21.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 9 12:30:21.574: INFO: namespace: e2e-tests-downward-api-kzzzr, resource: bindings, ignored listing per whitelist May 9 12:30:21.647: INFO: namespace e2e-tests-downward-api-kzzzr deletion completed in 22.125365851s • [SLOW TEST:28.769 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSMay 9 12:30:21.647: INFO: Running AfterSuite actions on all nodes May 9 12:30:21.647: INFO: Running AfterSuite actions on node 1 May 9 12:30:21.647: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6215.104 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS
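For completeness, the final test above ("should update labels on modification") depends on a downwardAPI volume that projects the pod's labels into a file; the kubelet rewrites that file when the labels change, which is what "Successfully updated pod" refers to. A Go sketch of such a volume (the file path "labels" is an illustrative choice):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path:     "labels",
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}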