I0122 10:47:15.435527 8 e2e.go:224] Starting e2e run "8cefcd74-3d04-11ea-ad91-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579690034 - Will randomize all specs
Will run 201 of 2164 specs

Jan 22 10:47:16.180: INFO: >>> kubeConfig: /root/.kube/config
Jan 22 10:47:16.186: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 22 10:47:16.210: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 22 10:47:16.250: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 22 10:47:16.250: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 22 10:47:16.250: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 22 10:47:16.273: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 22 10:47:16.273: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 22 10:47:16.273: INFO: e2e test version: v1.13.12
Jan 22 10:47:16.277: INFO: kube-apiserver version: v1.13.8
SSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:47:16.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
Jan 22 10:47:16.456: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-m9s9s
I0122 10:47:16.482940 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-m9s9s, replica count: 1
I0122 10:47:17.533889 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:18.534660 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:19.535207 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:20.536073 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:21.536528 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:22.537125 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:23.537650 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:24.538875 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0122 10:47:25.539545 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0
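The runners.go lines above report the replication controller's pod states on each poll tick until the single replica transitions from pending to running. A minimal sketch of how such a status line could be tallied from a list of pod phases (this is an illustrative simplification, not the actual test/utils/runners.go code; the `runningButNotReady` bucket is omitted):

```go
package main

import "fmt"

// PodStatus tallies pod phases the way the e2e RC runner reports them.
// Simplified sketch; the real runner also tracks readiness.
type PodStatus struct {
	Created, Running, Pending, Waiting, Inactive, Terminating, Unknown int
}

func tally(phases []string) PodStatus {
	s := PodStatus{Created: len(phases)}
	for _, p := range phases {
		switch p {
		case "Running":
			s.Running++
		case "Pending":
			s.Pending++
		default:
			s.Unknown++
		}
	}
	return s
}

func (s PodStatus) String() string {
	return fmt.Sprintf("%d out of %d created, %d running, %d pending, %d waiting, %d inactive, %d terminating, %d unknown",
		s.Created, s.Created, s.Running, s.Pending, s.Waiting, s.Inactive, s.Terminating, s.Unknown)
}

func main() {
	// Mirrors the log's single pending replica.
	fmt.Println(tally([]string{"Pending"}))
	// prints: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown
}
```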
unknown, 0 runningButNotReady Jan 22 10:47:25.720: INFO: Created: latency-svc-v9p6q Jan 22 10:47:25.859: INFO: Got endpoints: latency-svc-v9p6q [219.937703ms] Jan 22 10:47:26.004: INFO: Created: latency-svc-227s5 Jan 22 10:47:26.018: INFO: Got endpoints: latency-svc-227s5 [156.218974ms] Jan 22 10:47:26.050: INFO: Created: latency-svc-dwhqh Jan 22 10:47:26.072: INFO: Got endpoints: latency-svc-dwhqh [211.145836ms] Jan 22 10:47:26.279: INFO: Created: latency-svc-lbfkd Jan 22 10:47:26.351: INFO: Got endpoints: latency-svc-lbfkd [488.81205ms] Jan 22 10:47:26.600: INFO: Created: latency-svc-pqd8z Jan 22 10:47:26.658: INFO: Got endpoints: latency-svc-pqd8z [797.702684ms] Jan 22 10:47:26.889: INFO: Created: latency-svc-kzk9c Jan 22 10:47:26.906: INFO: Got endpoints: latency-svc-kzk9c [1.045341017s] Jan 22 10:47:26.983: INFO: Created: latency-svc-76jvr Jan 22 10:47:27.066: INFO: Got endpoints: latency-svc-76jvr [1.203463235s] Jan 22 10:47:27.094: INFO: Created: latency-svc-sr9mj Jan 22 10:47:27.102: INFO: Got endpoints: latency-svc-sr9mj [1.24207794s] Jan 22 10:47:27.155: INFO: Created: latency-svc-4xz85 Jan 22 10:47:27.293: INFO: Got endpoints: latency-svc-4xz85 [1.430998381s] Jan 22 10:47:27.305: INFO: Created: latency-svc-9xcvt Jan 22 10:47:27.328: INFO: Got endpoints: latency-svc-9xcvt [1.465944589s] Jan 22 10:47:27.374: INFO: Created: latency-svc-twj8k Jan 22 10:47:27.401: INFO: Got endpoints: latency-svc-twj8k [1.539736521s] Jan 22 10:47:27.589: INFO: Created: latency-svc-bv4zx Jan 22 10:47:27.601: INFO: Got endpoints: latency-svc-bv4zx [1.740826769s] Jan 22 10:47:27.765: INFO: Created: latency-svc-ndvm5 Jan 22 10:47:27.770: INFO: Got endpoints: latency-svc-ndvm5 [1.909175424s] Jan 22 10:47:28.164: INFO: Created: latency-svc-tr9hd Jan 22 10:47:28.204: INFO: Got endpoints: latency-svc-tr9hd [2.341757883s] Jan 22 10:47:28.404: INFO: Created: latency-svc-56txh Jan 22 10:47:28.470: INFO: Got endpoints: latency-svc-56txh [2.607201917s] Jan 22 10:47:28.490: INFO: Created: 
latency-svc-zzfg7 Jan 22 10:47:28.554: INFO: Got endpoints: latency-svc-zzfg7 [2.692623994s] Jan 22 10:47:28.677: INFO: Created: latency-svc-8qwbb Jan 22 10:47:28.866: INFO: Got endpoints: latency-svc-8qwbb [2.847401244s] Jan 22 10:47:28.910: INFO: Created: latency-svc-pkzwb Jan 22 10:47:28.935: INFO: Got endpoints: latency-svc-pkzwb [2.86264943s] Jan 22 10:47:29.127: INFO: Created: latency-svc-jht7l Jan 22 10:47:29.139: INFO: Got endpoints: latency-svc-jht7l [2.787320467s] Jan 22 10:47:29.196: INFO: Created: latency-svc-pmhdc Jan 22 10:47:29.351: INFO: Got endpoints: latency-svc-pmhdc [2.692869436s] Jan 22 10:47:29.422: INFO: Created: latency-svc-9vj4k Jan 22 10:47:29.430: INFO: Got endpoints: latency-svc-9vj4k [2.523675649s] Jan 22 10:47:29.601: INFO: Created: latency-svc-7lnft Jan 22 10:47:29.602: INFO: Got endpoints: latency-svc-7lnft [2.535739754s] Jan 22 10:47:29.761: INFO: Created: latency-svc-jn4zb Jan 22 10:47:29.845: INFO: Created: latency-svc-swv9c Jan 22 10:47:29.856: INFO: Got endpoints: latency-svc-jn4zb [2.75412771s] Jan 22 10:47:29.945: INFO: Got endpoints: latency-svc-swv9c [2.652179682s] Jan 22 10:47:29.985: INFO: Created: latency-svc-vvjw5 Jan 22 10:47:30.010: INFO: Got endpoints: latency-svc-vvjw5 [2.681424494s] Jan 22 10:47:30.124: INFO: Created: latency-svc-t57bl Jan 22 10:47:30.138: INFO: Got endpoints: latency-svc-t57bl [2.737304512s] Jan 22 10:47:30.189: INFO: Created: latency-svc-nq9ln Jan 22 10:47:30.434: INFO: Got endpoints: latency-svc-nq9ln [2.83257268s] Jan 22 10:47:30.468: INFO: Created: latency-svc-bwkhn Jan 22 10:47:30.512: INFO: Got endpoints: latency-svc-bwkhn [2.741036316s] Jan 22 10:47:30.683: INFO: Created: latency-svc-9plzp Jan 22 10:47:30.706: INFO: Got endpoints: latency-svc-9plzp [2.501736399s] Jan 22 10:47:30.958: INFO: Created: latency-svc-hm9lg Jan 22 10:47:30.974: INFO: Got endpoints: latency-svc-hm9lg [2.503472286s] Jan 22 10:47:31.257: INFO: Created: latency-svc-p55fr Jan 22 10:47:31.301: INFO: Got endpoints: 
latency-svc-p55fr [2.746063689s] Jan 22 10:47:31.561: INFO: Created: latency-svc-cs8w6 Jan 22 10:47:31.581: INFO: Got endpoints: latency-svc-cs8w6 [2.715247732s] Jan 22 10:47:31.722: INFO: Created: latency-svc-r4x28 Jan 22 10:47:31.747: INFO: Got endpoints: latency-svc-r4x28 [2.811824683s] Jan 22 10:47:31.808: INFO: Created: latency-svc-zkc5l Jan 22 10:47:31.969: INFO: Got endpoints: latency-svc-zkc5l [2.830055696s] Jan 22 10:47:32.000: INFO: Created: latency-svc-z7nhl Jan 22 10:47:32.033: INFO: Got endpoints: latency-svc-z7nhl [2.681819112s] Jan 22 10:47:32.181: INFO: Created: latency-svc-pzmr4 Jan 22 10:47:32.223: INFO: Got endpoints: latency-svc-pzmr4 [2.79313472s] Jan 22 10:47:32.423: INFO: Created: latency-svc-lvjs2 Jan 22 10:47:32.466: INFO: Got endpoints: latency-svc-lvjs2 [2.86364393s] Jan 22 10:47:32.668: INFO: Created: latency-svc-szr2w Jan 22 10:47:32.690: INFO: Got endpoints: latency-svc-szr2w [2.833125402s] Jan 22 10:47:32.756: INFO: Created: latency-svc-ntx9k Jan 22 10:47:32.840: INFO: Got endpoints: latency-svc-ntx9k [2.893995323s] Jan 22 10:47:32.917: INFO: Created: latency-svc-2gwks Jan 22 10:47:32.921: INFO: Got endpoints: latency-svc-2gwks [2.910840422s] Jan 22 10:47:33.033: INFO: Created: latency-svc-j6hv2 Jan 22 10:47:33.035: INFO: Got endpoints: latency-svc-j6hv2 [2.8963382s] Jan 22 10:47:33.085: INFO: Created: latency-svc-pfmbp Jan 22 10:47:33.090: INFO: Got endpoints: latency-svc-pfmbp [2.655541061s] Jan 22 10:47:33.294: INFO: Created: latency-svc-m7snq Jan 22 10:47:33.341: INFO: Got endpoints: latency-svc-m7snq [2.828803067s] Jan 22 10:47:33.449: INFO: Created: latency-svc-k7qr9 Jan 22 10:47:33.453: INFO: Got endpoints: latency-svc-k7qr9 [2.74703867s] Jan 22 10:47:33.613: INFO: Created: latency-svc-52xbp Jan 22 10:47:33.653: INFO: Got endpoints: latency-svc-52xbp [2.678884833s] Jan 22 10:47:33.860: INFO: Created: latency-svc-b5bp9 Jan 22 10:47:33.873: INFO: Got endpoints: latency-svc-b5bp9 [2.572256144s] Jan 22 10:47:34.029: INFO: Created: 
latency-svc-lckx9 Jan 22 10:47:34.047: INFO: Got endpoints: latency-svc-lckx9 [2.465269723s] Jan 22 10:47:34.122: INFO: Created: latency-svc-7f4dl Jan 22 10:47:34.225: INFO: Got endpoints: latency-svc-7f4dl [2.477315637s] Jan 22 10:47:34.260: INFO: Created: latency-svc-6rb4c Jan 22 10:47:34.276: INFO: Got endpoints: latency-svc-6rb4c [2.306693412s] Jan 22 10:47:34.313: INFO: Created: latency-svc-dn6dz Jan 22 10:47:34.450: INFO: Got endpoints: latency-svc-dn6dz [2.416154669s] Jan 22 10:47:34.594: INFO: Created: latency-svc-9nhgv Jan 22 10:47:34.682: INFO: Got endpoints: latency-svc-9nhgv [2.45843025s] Jan 22 10:47:34.719: INFO: Created: latency-svc-rwjh9 Jan 22 10:47:34.727: INFO: Got endpoints: latency-svc-rwjh9 [2.260895584s] Jan 22 10:47:34.767: INFO: Created: latency-svc-b5mfc Jan 22 10:47:34.872: INFO: Got endpoints: latency-svc-b5mfc [2.182110589s] Jan 22 10:47:34.937: INFO: Created: latency-svc-lgz7v Jan 22 10:47:34.961: INFO: Got endpoints: latency-svc-lgz7v [2.121289362s] Jan 22 10:47:35.296: INFO: Created: latency-svc-84jjs Jan 22 10:47:35.435: INFO: Got endpoints: latency-svc-84jjs [2.513518055s] Jan 22 10:47:35.475: INFO: Created: latency-svc-hm6vl Jan 22 10:47:35.501: INFO: Got endpoints: latency-svc-hm6vl [2.466473019s] Jan 22 10:47:35.718: INFO: Created: latency-svc-wl5md Jan 22 10:47:35.754: INFO: Got endpoints: latency-svc-wl5md [2.664224097s] Jan 22 10:47:35.901: INFO: Created: latency-svc-5bmnd Jan 22 10:47:36.075: INFO: Created: latency-svc-9rg9p Jan 22 10:47:36.097: INFO: Got endpoints: latency-svc-5bmnd [2.755815669s] Jan 22 10:47:36.112: INFO: Got endpoints: latency-svc-9rg9p [2.658475893s] Jan 22 10:47:36.171: INFO: Created: latency-svc-dc8pm Jan 22 10:47:36.297: INFO: Got endpoints: latency-svc-dc8pm [2.643601793s] Jan 22 10:47:36.545: INFO: Created: latency-svc-lmxn7 Jan 22 10:47:36.546: INFO: Got endpoints: latency-svc-lmxn7 [2.672299948s] Jan 22 10:47:36.615: INFO: Created: latency-svc-8hw5v Jan 22 10:47:36.682: INFO: Got endpoints: 
latency-svc-8hw5v [2.635032724s] Jan 22 10:47:36.750: INFO: Created: latency-svc-6rjc6 Jan 22 10:47:36.980: INFO: Got endpoints: latency-svc-6rjc6 [2.755861063s] Jan 22 10:47:37.020: INFO: Created: latency-svc-66bfd Jan 22 10:47:37.042: INFO: Got endpoints: latency-svc-66bfd [2.766171868s] Jan 22 10:47:37.156: INFO: Created: latency-svc-q9lwl Jan 22 10:47:37.189: INFO: Got endpoints: latency-svc-q9lwl [2.738407662s] Jan 22 10:47:37.282: INFO: Created: latency-svc-n657f Jan 22 10:47:37.333: INFO: Got endpoints: latency-svc-n657f [2.651107072s] Jan 22 10:47:37.413: INFO: Created: latency-svc-xs7nh Jan 22 10:47:37.422: INFO: Got endpoints: latency-svc-xs7nh [2.695304324s] Jan 22 10:47:37.533: INFO: Created: latency-svc-hphj8 Jan 22 10:47:37.573: INFO: Got endpoints: latency-svc-hphj8 [2.700840582s] Jan 22 10:47:37.687: INFO: Created: latency-svc-ksl2x Jan 22 10:47:37.712: INFO: Got endpoints: latency-svc-ksl2x [2.75077146s] Jan 22 10:47:37.759: INFO: Created: latency-svc-nrk8s Jan 22 10:47:37.839: INFO: Got endpoints: latency-svc-nrk8s [2.403835916s] Jan 22 10:47:37.877: INFO: Created: latency-svc-r7rbw Jan 22 10:47:37.902: INFO: Got endpoints: latency-svc-r7rbw [2.400369619s] Jan 22 10:47:38.015: INFO: Created: latency-svc-8lj6z Jan 22 10:47:38.031: INFO: Got endpoints: latency-svc-8lj6z [2.276385051s] Jan 22 10:47:38.081: INFO: Created: latency-svc-tmzkb Jan 22 10:47:38.166: INFO: Got endpoints: latency-svc-tmzkb [2.069318821s] Jan 22 10:47:38.196: INFO: Created: latency-svc-ftsln Jan 22 10:47:38.222: INFO: Got endpoints: latency-svc-ftsln [2.110349331s] Jan 22 10:47:38.334: INFO: Created: latency-svc-dllkd Jan 22 10:47:38.339: INFO: Got endpoints: latency-svc-dllkd [2.041652084s] Jan 22 10:47:38.416: INFO: Created: latency-svc-qc485 Jan 22 10:47:38.539: INFO: Got endpoints: latency-svc-qc485 [1.993019833s] Jan 22 10:47:38.601: INFO: Created: latency-svc-gq6z7 Jan 22 10:47:38.766: INFO: Got endpoints: latency-svc-gq6z7 [2.084341732s] Jan 22 10:47:38.803: INFO: 
Created: latency-svc-rcspm Jan 22 10:47:38.803: INFO: Got endpoints: latency-svc-rcspm [1.822295649s] Jan 22 10:47:38.844: INFO: Created: latency-svc-hqmf4 Jan 22 10:47:38.973: INFO: Got endpoints: latency-svc-hqmf4 [1.930348391s] Jan 22 10:47:39.033: INFO: Created: latency-svc-bm2s8 Jan 22 10:47:39.040: INFO: Got endpoints: latency-svc-bm2s8 [1.851373668s] Jan 22 10:47:39.177: INFO: Created: latency-svc-b6m64 Jan 22 10:47:39.227: INFO: Got endpoints: latency-svc-b6m64 [1.893692561s] Jan 22 10:47:39.349: INFO: Created: latency-svc-m5jxh Jan 22 10:47:39.360: INFO: Got endpoints: latency-svc-m5jxh [1.937241957s] Jan 22 10:47:39.419: INFO: Created: latency-svc-bffk2 Jan 22 10:47:39.522: INFO: Got endpoints: latency-svc-bffk2 [1.948279717s] Jan 22 10:47:39.556: INFO: Created: latency-svc-9j5q5 Jan 22 10:47:39.564: INFO: Got endpoints: latency-svc-9j5q5 [1.85166734s] Jan 22 10:47:39.631: INFO: Created: latency-svc-4sdbf Jan 22 10:47:39.740: INFO: Got endpoints: latency-svc-4sdbf [1.901029946s] Jan 22 10:47:39.761: INFO: Created: latency-svc-6p6qz Jan 22 10:47:39.777: INFO: Got endpoints: latency-svc-6p6qz [1.874687224s] Jan 22 10:47:39.830: INFO: Created: latency-svc-h6gxd Jan 22 10:47:39.949: INFO: Got endpoints: latency-svc-h6gxd [1.917449249s] Jan 22 10:47:39.971: INFO: Created: latency-svc-hw7kp Jan 22 10:47:39.998: INFO: Got endpoints: latency-svc-hw7kp [1.831117059s] Jan 22 10:47:40.109: INFO: Created: latency-svc-6r2nt Jan 22 10:47:40.121: INFO: Got endpoints: latency-svc-6r2nt [1.898615654s] Jan 22 10:47:40.366: INFO: Created: latency-svc-mp7ms Jan 22 10:47:40.366: INFO: Got endpoints: latency-svc-mp7ms [2.027006872s] Jan 22 10:47:40.441: INFO: Created: latency-svc-wwjzp Jan 22 10:47:40.527: INFO: Got endpoints: latency-svc-wwjzp [1.98841434s] Jan 22 10:47:40.562: INFO: Created: latency-svc-mdtwn Jan 22 10:47:40.720: INFO: Got endpoints: latency-svc-mdtwn [1.953732888s] Jan 22 10:47:40.833: INFO: Created: latency-svc-f79dq Jan 22 10:47:40.899: INFO: Got 
endpoints: latency-svc-f79dq [2.096245681s] Jan 22 10:47:40.923: INFO: Created: latency-svc-4qdw8 Jan 22 10:47:40.944: INFO: Got endpoints: latency-svc-4qdw8 [223.593319ms] Jan 22 10:47:41.100: INFO: Created: latency-svc-6lplz Jan 22 10:47:41.112: INFO: Got endpoints: latency-svc-6lplz [2.138951411s] Jan 22 10:47:41.182: INFO: Created: latency-svc-lqzd8 Jan 22 10:47:41.305: INFO: Got endpoints: latency-svc-lqzd8 [2.264569705s] Jan 22 10:47:41.320: INFO: Created: latency-svc-xzv6b Jan 22 10:47:41.342: INFO: Got endpoints: latency-svc-xzv6b [2.115026572s] Jan 22 10:47:41.467: INFO: Created: latency-svc-tv7qf Jan 22 10:47:41.488: INFO: Got endpoints: latency-svc-tv7qf [2.128021203s] Jan 22 10:47:41.617: INFO: Created: latency-svc-65c29 Jan 22 10:47:41.629: INFO: Got endpoints: latency-svc-65c29 [2.107148447s] Jan 22 10:47:41.698: INFO: Created: latency-svc-ckqgs Jan 22 10:47:41.710: INFO: Got endpoints: latency-svc-ckqgs [2.145896469s] Jan 22 10:47:41.814: INFO: Created: latency-svc-mxvrz Jan 22 10:47:41.860: INFO: Got endpoints: latency-svc-mxvrz [2.119825956s] Jan 22 10:47:42.055: INFO: Created: latency-svc-cpchd Jan 22 10:47:42.073: INFO: Got endpoints: latency-svc-cpchd [2.295806855s] Jan 22 10:47:42.304: INFO: Created: latency-svc-jt4gq Jan 22 10:47:42.461: INFO: Got endpoints: latency-svc-jt4gq [2.51174773s] Jan 22 10:47:43.469: INFO: Created: latency-svc-9wwk8 Jan 22 10:47:43.604: INFO: Got endpoints: latency-svc-9wwk8 [3.60603378s] Jan 22 10:47:43.762: INFO: Created: latency-svc-jgll7 Jan 22 10:47:43.819: INFO: Got endpoints: latency-svc-jgll7 [3.698200901s] Jan 22 10:47:43.937: INFO: Created: latency-svc-7hnbw Jan 22 10:47:43.982: INFO: Got endpoints: latency-svc-7hnbw [3.616595651s] Jan 22 10:47:44.001: INFO: Created: latency-svc-6qvbs Jan 22 10:47:44.013: INFO: Got endpoints: latency-svc-6qvbs [3.48554992s] Jan 22 10:47:44.076: INFO: Created: latency-svc-hm8q4 Jan 22 10:47:44.196: INFO: Got endpoints: latency-svc-hm8q4 [3.296622017s] Jan 22 10:47:44.227: 
INFO: Created: latency-svc-tqkqd Jan 22 10:47:44.242: INFO: Got endpoints: latency-svc-tqkqd [3.297583616s] Jan 22 10:47:44.283: INFO: Created: latency-svc-z8shn Jan 22 10:47:44.371: INFO: Got endpoints: latency-svc-z8shn [3.258765875s] Jan 22 10:47:44.400: INFO: Created: latency-svc-7t5x2 Jan 22 10:47:44.407: INFO: Got endpoints: latency-svc-7t5x2 [3.101576934s] Jan 22 10:47:44.453: INFO: Created: latency-svc-cxlvt Jan 22 10:47:44.460: INFO: Got endpoints: latency-svc-cxlvt [3.117874683s] Jan 22 10:47:44.698: INFO: Created: latency-svc-qv4v5 Jan 22 10:47:44.710: INFO: Got endpoints: latency-svc-qv4v5 [3.221959556s] Jan 22 10:47:44.828: INFO: Created: latency-svc-j2slq Jan 22 10:47:44.861: INFO: Got endpoints: latency-svc-j2slq [3.23157031s] Jan 22 10:47:44.995: INFO: Created: latency-svc-hpdmq Jan 22 10:47:44.998: INFO: Got endpoints: latency-svc-hpdmq [3.287283777s] Jan 22 10:47:45.050: INFO: Created: latency-svc-xnxdt Jan 22 10:47:45.147: INFO: Got endpoints: latency-svc-xnxdt [3.28708652s] Jan 22 10:47:45.392: INFO: Created: latency-svc-c2qqd Jan 22 10:47:45.432: INFO: Got endpoints: latency-svc-c2qqd [3.359072953s] Jan 22 10:47:45.580: INFO: Created: latency-svc-cgbcf Jan 22 10:47:45.597: INFO: Got endpoints: latency-svc-cgbcf [3.135698451s] Jan 22 10:47:45.787: INFO: Created: latency-svc-mgtpn Jan 22 10:47:45.822: INFO: Got endpoints: latency-svc-mgtpn [2.217085643s] Jan 22 10:47:45.864: INFO: Created: latency-svc-rw4tz Jan 22 10:47:46.023: INFO: Got endpoints: latency-svc-rw4tz [2.203650443s] Jan 22 10:47:46.055: INFO: Created: latency-svc-rkfws Jan 22 10:47:46.079: INFO: Got endpoints: latency-svc-rkfws [2.096021376s] Jan 22 10:47:46.230: INFO: Created: latency-svc-p2j5j Jan 22 10:47:46.298: INFO: Got endpoints: latency-svc-p2j5j [2.284331096s] Jan 22 10:47:46.303: INFO: Created: latency-svc-nrwmv Jan 22 10:47:46.309: INFO: Got endpoints: latency-svc-nrwmv [2.112333937s] Jan 22 10:47:46.770: INFO: Created: latency-svc-hwvcq Jan 22 10:47:46.772: INFO: 
Created: latency-svc-nbzw7 Jan 22 10:47:46.802: INFO: Got endpoints: latency-svc-hwvcq [2.430591836s] Jan 22 10:47:46.802: INFO: Got endpoints: latency-svc-nbzw7 [2.559602984s] Jan 22 10:47:46.993: INFO: Created: latency-svc-9z76h Jan 22 10:47:47.007: INFO: Got endpoints: latency-svc-9z76h [2.600654547s] Jan 22 10:47:47.076: INFO: Created: latency-svc-6z8gc Jan 22 10:47:47.138: INFO: Got endpoints: latency-svc-6z8gc [2.678300422s] Jan 22 10:47:47.172: INFO: Created: latency-svc-l92gv Jan 22 10:47:47.179: INFO: Got endpoints: latency-svc-l92gv [2.468668337s] Jan 22 10:47:47.238: INFO: Created: latency-svc-p896p Jan 22 10:47:47.361: INFO: Got endpoints: latency-svc-p896p [2.499523041s] Jan 22 10:47:47.435: INFO: Created: latency-svc-hdcs8 Jan 22 10:47:47.435: INFO: Got endpoints: latency-svc-hdcs8 [2.437303155s] Jan 22 10:47:47.582: INFO: Created: latency-svc-rqnd7 Jan 22 10:47:47.582: INFO: Got endpoints: latency-svc-rqnd7 [2.434848983s] Jan 22 10:47:47.645: INFO: Created: latency-svc-k5cvd Jan 22 10:47:47.724: INFO: Got endpoints: latency-svc-k5cvd [2.291611286s] Jan 22 10:47:47.771: INFO: Created: latency-svc-z8tmr Jan 22 10:47:47.772: INFO: Got endpoints: latency-svc-z8tmr [2.174581447s] Jan 22 10:47:48.024: INFO: Created: latency-svc-45d5f Jan 22 10:47:48.152: INFO: Got endpoints: latency-svc-45d5f [2.330181331s] Jan 22 10:47:48.405: INFO: Created: latency-svc-kxjkj Jan 22 10:47:48.424: INFO: Got endpoints: latency-svc-kxjkj [2.400272447s] Jan 22 10:47:48.476: INFO: Created: latency-svc-f92ct Jan 22 10:47:48.593: INFO: Got endpoints: latency-svc-f92ct [2.514480869s] Jan 22 10:47:48.608: INFO: Created: latency-svc-r4zq6 Jan 22 10:47:48.649: INFO: Got endpoints: latency-svc-r4zq6 [2.351628694s] Jan 22 10:47:48.660: INFO: Created: latency-svc-bdwss Jan 22 10:47:48.675: INFO: Got endpoints: latency-svc-bdwss [2.366687017s] Jan 22 10:47:48.848: INFO: Created: latency-svc-f42tf Jan 22 10:47:48.870: INFO: Got endpoints: latency-svc-f42tf [2.06792514s] Jan 22 
10:47:49.047: INFO: Created: latency-svc-pfcw2 Jan 22 10:47:49.076: INFO: Got endpoints: latency-svc-pfcw2 [2.27371107s] Jan 22 10:47:49.181: INFO: Created: latency-svc-qt29g Jan 22 10:47:49.208: INFO: Got endpoints: latency-svc-qt29g [2.200638201s] Jan 22 10:47:49.276: INFO: Created: latency-svc-d4cxw Jan 22 10:47:49.346: INFO: Got endpoints: latency-svc-d4cxw [2.207339714s] Jan 22 10:47:49.444: INFO: Created: latency-svc-6hwb7 Jan 22 10:47:49.548: INFO: Got endpoints: latency-svc-6hwb7 [2.369244409s] Jan 22 10:47:49.576: INFO: Created: latency-svc-zr2l2 Jan 22 10:47:49.732: INFO: Got endpoints: latency-svc-zr2l2 [2.371545274s] Jan 22 10:47:49.767: INFO: Created: latency-svc-djc9b Jan 22 10:47:49.815: INFO: Got endpoints: latency-svc-djc9b [2.379999478s] Jan 22 10:47:49.935: INFO: Created: latency-svc-js28x Jan 22 10:47:49.944: INFO: Got endpoints: latency-svc-js28x [2.361184386s] Jan 22 10:47:50.002: INFO: Created: latency-svc-gsnxw Jan 22 10:47:50.009: INFO: Got endpoints: latency-svc-gsnxw [2.28413233s] Jan 22 10:47:50.144: INFO: Created: latency-svc-mgzm5 Jan 22 10:47:50.160: INFO: Got endpoints: latency-svc-mgzm5 [2.387650084s] Jan 22 10:47:50.417: INFO: Created: latency-svc-mhlbf Jan 22 10:47:50.463: INFO: Got endpoints: latency-svc-mhlbf [2.310845519s] Jan 22 10:47:50.612: INFO: Created: latency-svc-r962f Jan 22 10:47:50.780: INFO: Got endpoints: latency-svc-r962f [2.355954424s] Jan 22 10:47:50.800: INFO: Created: latency-svc-f72l4 Jan 22 10:47:50.844: INFO: Got endpoints: latency-svc-f72l4 [2.250201141s] Jan 22 10:47:51.017: INFO: Created: latency-svc-s7sfz Jan 22 10:47:51.050: INFO: Got endpoints: latency-svc-s7sfz [2.400777274s] Jan 22 10:47:51.185: INFO: Created: latency-svc-977cq Jan 22 10:47:51.238: INFO: Got endpoints: latency-svc-977cq [2.562841812s] Jan 22 10:47:51.357: INFO: Created: latency-svc-xnb2p Jan 22 10:47:51.388: INFO: Got endpoints: latency-svc-xnb2p [2.517523528s] Jan 22 10:47:51.637: INFO: Created: latency-svc-l7wm7 Jan 22 
10:47:51.650: INFO: Got endpoints: latency-svc-l7wm7 [2.574485469s] Jan 22 10:47:51.813: INFO: Created: latency-svc-g6jjf Jan 22 10:47:51.816: INFO: Got endpoints: latency-svc-g6jjf [2.607762116s] Jan 22 10:47:51.896: INFO: Created: latency-svc-s9j8r Jan 22 10:47:51.989: INFO: Got endpoints: latency-svc-s9j8r [2.642312341s] Jan 22 10:47:52.026: INFO: Created: latency-svc-t8qm9 Jan 22 10:47:52.044: INFO: Got endpoints: latency-svc-t8qm9 [2.496098406s] Jan 22 10:47:52.228: INFO: Created: latency-svc-qn27q Jan 22 10:47:52.240: INFO: Got endpoints: latency-svc-qn27q [2.507067872s] Jan 22 10:47:52.395: INFO: Created: latency-svc-5xh29 Jan 22 10:47:52.401: INFO: Got endpoints: latency-svc-5xh29 [2.585950967s] Jan 22 10:47:52.463: INFO: Created: latency-svc-qmmsw Jan 22 10:47:52.590: INFO: Got endpoints: latency-svc-qmmsw [2.646265272s] Jan 22 10:47:52.608: INFO: Created: latency-svc-gdjn5 Jan 22 10:47:52.616: INFO: Got endpoints: latency-svc-gdjn5 [2.607109978s] Jan 22 10:47:52.786: INFO: Created: latency-svc-7kf82 Jan 22 10:47:52.819: INFO: Got endpoints: latency-svc-7kf82 [2.658926798s] Jan 22 10:47:52.822: INFO: Created: latency-svc-qstqw Jan 22 10:47:52.834: INFO: Got endpoints: latency-svc-qstqw [2.370979343s] Jan 22 10:47:53.027: INFO: Created: latency-svc-ng2rf Jan 22 10:47:53.057: INFO: Got endpoints: latency-svc-ng2rf [2.276183946s] Jan 22 10:47:53.069: INFO: Created: latency-svc-qgfkk Jan 22 10:47:53.075: INFO: Got endpoints: latency-svc-qgfkk [2.231359684s] Jan 22 10:47:53.239: INFO: Created: latency-svc-xrtnt Jan 22 10:47:53.259: INFO: Got endpoints: latency-svc-xrtnt [2.20843622s] Jan 22 10:47:53.391: INFO: Created: latency-svc-wxmhw Jan 22 10:47:53.399: INFO: Got endpoints: latency-svc-wxmhw [2.160780259s] Jan 22 10:47:53.475: INFO: Created: latency-svc-qbs79 Jan 22 10:47:54.029: INFO: Got endpoints: latency-svc-qbs79 [2.641475278s] Jan 22 10:47:54.072: INFO: Created: latency-svc-m5cgh Jan 22 10:47:54.278: INFO: Got endpoints: latency-svc-m5cgh 
[2.627239338s] Jan 22 10:47:54.321: INFO: Created: latency-svc-gvsmj Jan 22 10:47:54.356: INFO: Got endpoints: latency-svc-gvsmj [2.53920115s] Jan 22 10:47:54.509: INFO: Created: latency-svc-ld4w9 Jan 22 10:47:54.527: INFO: Got endpoints: latency-svc-ld4w9 [2.537913912s] Jan 22 10:47:54.708: INFO: Created: latency-svc-f6xph Jan 22 10:47:54.708: INFO: Got endpoints: latency-svc-f6xph [2.663565983s] Jan 22 10:47:54.880: INFO: Created: latency-svc-v4qz6 Jan 22 10:47:54.935: INFO: Got endpoints: latency-svc-v4qz6 [2.694816443s] Jan 22 10:47:54.936: INFO: Created: latency-svc-4m25p Jan 22 10:47:55.079: INFO: Got endpoints: latency-svc-4m25p [2.677889054s] Jan 22 10:47:55.106: INFO: Created: latency-svc-d6sb6 Jan 22 10:47:55.149: INFO: Got endpoints: latency-svc-d6sb6 [2.558155004s] Jan 22 10:47:55.355: INFO: Created: latency-svc-plm4c Jan 22 10:47:55.369: INFO: Got endpoints: latency-svc-plm4c [2.75322874s] Jan 22 10:47:55.473: INFO: Created: latency-svc-mxbgn Jan 22 10:47:55.496: INFO: Got endpoints: latency-svc-mxbgn [2.67698108s] Jan 22 10:47:55.523: INFO: Created: latency-svc-pd7fx Jan 22 10:47:55.534: INFO: Got endpoints: latency-svc-pd7fx [2.699225073s] Jan 22 10:47:55.640: INFO: Created: latency-svc-qj7g5 Jan 22 10:47:55.660: INFO: Got endpoints: latency-svc-qj7g5 [2.603399922s] Jan 22 10:47:55.703: INFO: Created: latency-svc-tc8fk Jan 22 10:47:55.706: INFO: Got endpoints: latency-svc-tc8fk [2.63104976s] Jan 22 10:47:55.838: INFO: Created: latency-svc-n6cv5 Jan 22 10:47:55.858: INFO: Got endpoints: latency-svc-n6cv5 [2.598595372s] Jan 22 10:47:55.900: INFO: Created: latency-svc-bjsr9 Jan 22 10:47:55.928: INFO: Got endpoints: latency-svc-bjsr9 [2.52840393s] Jan 22 10:47:55.937: INFO: Created: latency-svc-n777b Jan 22 10:47:56.087: INFO: Got endpoints: latency-svc-n777b [2.057359775s] Jan 22 10:47:56.118: INFO: Created: latency-svc-95m4d Jan 22 10:47:56.135: INFO: Got endpoints: latency-svc-95m4d [1.856841401s] Jan 22 10:47:56.181: INFO: Created: latency-svc-pzjrx 
Jan 22 10:47:56.261: INFO: Got endpoints: latency-svc-pzjrx [1.905495004s] Jan 22 10:47:56.300: INFO: Created: latency-svc-rvlcd Jan 22 10:47:56.325: INFO: Got endpoints: latency-svc-rvlcd [1.798106904s] Jan 22 10:47:56.522: INFO: Created: latency-svc-vj7c9 Jan 22 10:47:56.522: INFO: Got endpoints: latency-svc-vj7c9 [1.813756512s] Jan 22 10:47:56.623: INFO: Created: latency-svc-jwrnd Jan 22 10:47:56.630: INFO: Got endpoints: latency-svc-jwrnd [1.695203811s] Jan 22 10:47:56.812: INFO: Created: latency-svc-5bwwf Jan 22 10:47:56.833: INFO: Got endpoints: latency-svc-5bwwf [1.752987303s] Jan 22 10:47:56.900: INFO: Created: latency-svc-vlknq Jan 22 10:47:57.159: INFO: Got endpoints: latency-svc-vlknq [2.009901665s] Jan 22 10:47:57.201: INFO: Created: latency-svc-x4wjc Jan 22 10:47:57.217: INFO: Got endpoints: latency-svc-x4wjc [1.847530797s] Jan 22 10:47:57.389: INFO: Created: latency-svc-dx97t Jan 22 10:47:57.408: INFO: Got endpoints: latency-svc-dx97t [1.911775282s] Jan 22 10:47:57.516: INFO: Created: latency-svc-cvv88 Jan 22 10:47:57.577: INFO: Got endpoints: latency-svc-cvv88 [2.043572117s] Jan 22 10:47:57.629: INFO: Created: latency-svc-8cjkh Jan 22 10:47:57.629: INFO: Got endpoints: latency-svc-8cjkh [1.968159522s] Jan 22 10:47:57.683: INFO: Created: latency-svc-zj94f Jan 22 10:47:57.761: INFO: Got endpoints: latency-svc-zj94f [2.054708893s] Jan 22 10:47:57.793: INFO: Created: latency-svc-8c6sd Jan 22 10:47:57.808: INFO: Got endpoints: latency-svc-8c6sd [1.949944734s] Jan 22 10:47:57.981: INFO: Created: latency-svc-mlkrn Jan 22 10:47:57.982: INFO: Got endpoints: latency-svc-mlkrn [2.053399201s] Jan 22 10:47:58.090: INFO: Created: latency-svc-pmm5r Jan 22 10:47:58.129: INFO: Got endpoints: latency-svc-pmm5r [2.041952555s] Jan 22 10:47:58.269: INFO: Created: latency-svc-54rjd Jan 22 10:47:58.271: INFO: Got endpoints: latency-svc-54rjd [2.136673012s] Jan 22 10:47:58.301: INFO: Created: latency-svc-m74hq Jan 22 10:47:58.315: INFO: Got endpoints: latency-svc-m74hq 
[2.053560977s] Jan 22 10:47:58.315: INFO: Latencies: [156.218974ms 211.145836ms 223.593319ms 488.81205ms 797.702684ms 1.045341017s 1.203463235s 1.24207794s 1.430998381s 1.465944589s 1.539736521s 1.695203811s 1.740826769s 1.752987303s 1.798106904s 1.813756512s 1.822295649s 1.831117059s 1.847530797s 1.851373668s 1.85166734s 1.856841401s 1.874687224s 1.893692561s 1.898615654s 1.901029946s 1.905495004s 1.909175424s 1.911775282s 1.917449249s 1.930348391s 1.937241957s 1.948279717s 1.949944734s 1.953732888s 1.968159522s 1.98841434s 1.993019833s 2.009901665s 2.027006872s 2.041652084s 2.041952555s 2.043572117s 2.053399201s 2.053560977s 2.054708893s 2.057359775s 2.06792514s 2.069318821s 2.084341732s 2.096021376s 2.096245681s 2.107148447s 2.110349331s 2.112333937s 2.115026572s 2.119825956s 2.121289362s 2.128021203s 2.136673012s 2.138951411s 2.145896469s 2.160780259s 2.174581447s 2.182110589s 2.200638201s 2.203650443s 2.207339714s 2.20843622s 2.217085643s 2.231359684s 2.250201141s 2.260895584s 2.264569705s 2.27371107s 2.276183946s 2.276385051s 2.28413233s 2.284331096s 2.291611286s 2.295806855s 2.306693412s 2.310845519s 2.330181331s 2.341757883s 2.351628694s 2.355954424s 2.361184386s 2.366687017s 2.369244409s 2.370979343s 2.371545274s 2.379999478s 2.387650084s 2.400272447s 2.400369619s 2.400777274s 2.403835916s 2.416154669s 2.430591836s 2.434848983s 2.437303155s 2.45843025s 2.465269723s 2.466473019s 2.468668337s 2.477315637s 2.496098406s 2.499523041s 2.501736399s 2.503472286s 2.507067872s 2.51174773s 2.513518055s 2.514480869s 2.517523528s 2.523675649s 2.52840393s 2.535739754s 2.537913912s 2.53920115s 2.558155004s 2.559602984s 2.562841812s 2.572256144s 2.574485469s 2.585950967s 2.598595372s 2.600654547s 2.603399922s 2.607109978s 2.607201917s 2.607762116s 2.627239338s 2.63104976s 2.635032724s 2.641475278s 2.642312341s 2.643601793s 2.646265272s 2.651107072s 2.652179682s 2.655541061s 2.658475893s 2.658926798s 2.663565983s 2.664224097s 2.672299948s 2.67698108s 2.677889054s 
2.678300422s 2.678884833s 2.681424494s 2.681819112s 2.692623994s 2.692869436s 2.694816443s 2.695304324s 2.699225073s 2.700840582s 2.715247732s 2.737304512s 2.738407662s 2.741036316s 2.746063689s 2.74703867s 2.75077146s 2.75322874s 2.75412771s 2.755815669s 2.755861063s 2.766171868s 2.787320467s 2.79313472s 2.811824683s 2.828803067s 2.830055696s 2.83257268s 2.833125402s 2.847401244s 2.86264943s 2.86364393s 2.893995323s 2.8963382s 2.910840422s 3.101576934s 3.117874683s 3.135698451s 3.221959556s 3.23157031s 3.258765875s 3.28708652s 3.287283777s 3.296622017s 3.297583616s 3.359072953s 3.48554992s 3.60603378s 3.616595651s 3.698200901s]
Jan 22 10:47:58.316: INFO: 50 %ile: 2.434848983s
Jan 22 10:47:58.316: INFO: 90 %ile: 2.86264943s
Jan 22 10:47:58.316: INFO: 99 %ile: 3.616595651s
Jan 22 10:47:58.316: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:47:58.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-m9s9s" for this suite.
Jan 22 10:48:48.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:48:48.736: INFO: namespace: e2e-tests-svc-latency-m9s9s, resource: bindings, ignored listing per whitelist
Jan 22 10:48:48.742: INFO: namespace e2e-tests-svc-latency-m9s9s deletion completed in 50.405677713s
• [SLOW TEST:92.465 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] ConfigMap
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:48:48.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-jdpjx/configmap-test-c578027d-3d04-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 10:48:49.175: INFO: Waiting up to 5m0s for pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-jdpjx" to be "success or failure"
Jan 22 10:48:49.194: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.018644ms
Jan 22 10:48:51.206: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031453778s
Jan 22 10:48:53.217: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042017767s
Jan 22 10:48:55.232: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057292489s
Jan 22 10:48:57.350: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174731294s
Jan 22 10:48:59.369: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194357235s
Jan 22 10:49:01.386: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.211296916s
STEP: Saw pod success
Jan 22 10:49:01.386: INFO: Pod "pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 10:49:01.397: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005 container env-test:
STEP: delete the pod
Jan 22 10:49:01.502: INFO: Waiting for pod pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005 to disappear
Jan 22 10:49:01.512: INFO: Pod pod-configmaps-c579a338-3d04-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:49:01.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jdpjx" for this suite.
Jan 22 10:49:07.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:49:07.758: INFO: namespace: e2e-tests-configmap-jdpjx, resource: bindings, ignored listing per whitelist
Jan 22 10:49:07.891: INFO: namespace e2e-tests-configmap-jdpjx deletion completed in 6.372082958s
• [SLOW TEST:19.149 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:49:07.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 22 10:49:08.116: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 22 10:49:08.126: INFO: Waiting for terminating namespaces to be deleted...
Jan 22 10:49:08.138: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 22 10:49:08.158: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 22 10:49:08.159: INFO: Container kube-proxy ready: true, restart count 0
Jan 22 10:49:08.159: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 22 10:49:08.159: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 22 10:49:08.159: INFO: Container weave ready: true, restart count 0
Jan 22 10:49:08.159: INFO: Container weave-npc ready: true, restart count 0
Jan 22 10:49:08.159: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 22 10:49:08.159: INFO: Container coredns ready: true, restart count 0
Jan 22 10:49:08.159: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 22 10:49:08.159: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 22 10:49:08.159: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 22 10:49:08.159: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 22 10:49:08.159: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d6e9df4e-3d04-11ea-ad91-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d6e9df4e-3d04-11ea-ad91-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d6e9df4e-3d04-11ea-ad91-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:49:30.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-724mj" for this suite.
Jan 22 10:49:44.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:49:44.951: INFO: namespace: e2e-tests-sched-pred-724mj, resource: bindings, ignored listing per whitelist
Jan 22 10:49:44.960: INFO: namespace e2e-tests-sched-pred-724mj deletion completed in 14.235721941s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:37.069 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:49:44.962: INFO: >>>
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jan 22 10:49:45.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tdfsn' Jan 22 10:49:47.195: INFO: stderr: "" Jan 22 10:49:47.196: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Jan 22 10:49:48.208: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:48.208: INFO: Found 0 / 1 Jan 22 10:49:49.288: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:49.289: INFO: Found 0 / 1 Jan 22 10:49:50.208: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:50.208: INFO: Found 0 / 1 Jan 22 10:49:51.220: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:51.220: INFO: Found 0 / 1 Jan 22 10:49:52.613: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:52.613: INFO: Found 0 / 1 Jan 22 10:49:53.318: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:53.318: INFO: Found 0 / 1 Jan 22 10:49:54.209: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:54.210: INFO: Found 0 / 1 Jan 22 10:49:55.212: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:55.212: INFO: Found 0 / 1 Jan 22 10:49:56.215: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:56.215: INFO: Found 0 / 1 Jan 22 10:49:57.216: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:57.216: INFO: Found 1 / 1 Jan 
22 10:49:57.216: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 22 10:49:57.222: INFO: Selector matched 1 pods for map[app:redis] Jan 22 10:49:57.222: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 22 10:49:57.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lw7sw redis-master --namespace=e2e-tests-kubectl-tdfsn' Jan 22 10:49:57.466: INFO: stderr: "" Jan 22 10:49:57.466: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jan 10:49:55.152 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jan 10:49:55.152 # Server started, Redis version 3.2.12\n1:M 22 Jan 10:49:55.153 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 22 Jan 10:49:55.153 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 22 10:49:57.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lw7sw redis-master --namespace=e2e-tests-kubectl-tdfsn --tail=1' Jan 22 10:49:57.692: INFO: stderr: "" Jan 22 10:49:57.693: INFO: stdout: "1:M 22 Jan 10:49:55.153 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 22 10:49:57.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lw7sw redis-master --namespace=e2e-tests-kubectl-tdfsn --limit-bytes=1' Jan 22 10:49:57.869: INFO: stderr: "" Jan 22 10:49:57.869: INFO: stdout: " " STEP: exposing timestamps Jan 22 10:49:57.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lw7sw redis-master --namespace=e2e-tests-kubectl-tdfsn --tail=1 --timestamps' Jan 22 10:49:58.050: INFO: stderr: "" Jan 22 10:49:58.050: INFO: stdout: "2020-01-22T10:49:55.1538837Z 1:M 22 Jan 10:49:55.153 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 22 10:50:00.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lw7sw redis-master --namespace=e2e-tests-kubectl-tdfsn --since=1s' Jan 22 10:50:00.782: INFO: stderr: "" Jan 22 10:50:00.782: INFO: stdout: "" Jan 22 10:50:00.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lw7sw redis-master --namespace=e2e-tests-kubectl-tdfsn --since=24h' Jan 22 10:50:00.942: INFO: stderr: "" Jan 22 10:50:00.942: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jan 10:49:55.152 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jan 10:49:55.152 # Server started, Redis version 3.2.12\n1:M 22 Jan 10:49:55.153 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jan 10:49:55.153 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jan 22 10:50:00.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tdfsn' Jan 22 10:50:01.099: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 22 10:50:01.099: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 22 10:50:01.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-tdfsn' Jan 22 10:50:01.363: INFO: stderr: "No resources found.\n" Jan 22 10:50:01.364: INFO: stdout: "" Jan 22 10:50:01.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-tdfsn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 10:50:01.584: INFO: stderr: "" Jan 22 10:50:01.584: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 10:50:01.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tdfsn" for this suite. 
Jan 22 10:50:08.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 10:50:08.615: INFO: namespace: e2e-tests-kubectl-tdfsn, resource: bindings, ignored listing per whitelist Jan 22 10:50:08.871: INFO: namespace e2e-tests-kubectl-tdfsn deletion completed in 7.277099764s • [SLOW TEST:23.909 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 10:50:08.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 22 10:50:09.158: INFO: Waiting up to 5m0s for pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-tp6s2" to be "success or failure" Jan 22 10:50:09.171: INFO: Pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.435109ms Jan 22 10:50:11.191: INFO: Pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033001406s Jan 22 10:50:13.200: INFO: Pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042480557s Jan 22 10:50:15.350: INFO: Pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19236624s Jan 22 10:50:17.375: INFO: Pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21710362s Jan 22 10:50:19.396: INFO: Pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.238406973s STEP: Saw pod success Jan 22 10:50:19.396: INFO: Pod "pod-f53646eb-3d04-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 10:50:19.405: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f53646eb-3d04-11ea-ad91-0242ac110005 container test-container: STEP: delete the pod Jan 22 10:50:19.496: INFO: Waiting for pod pod-f53646eb-3d04-11ea-ad91-0242ac110005 to disappear Jan 22 10:50:19.502: INFO: Pod pod-f53646eb-3d04-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 10:50:19.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tp6s2" for this suite. 
Jan 22 10:50:25.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 10:50:25.637: INFO: namespace: e2e-tests-emptydir-tp6s2, resource: bindings, ignored listing per whitelist Jan 22 10:50:25.687: INFO: namespace e2e-tests-emptydir-tp6s2 deletion completed in 6.177217852s • [SLOW TEST:16.816 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 10:50:25.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 22 10:50:25.854: INFO: Waiting up to 5m0s for pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-8k6m4" to be "success or failure" Jan 22 10:50:25.874: INFO: Pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.430676ms Jan 22 10:50:27.885: INFO: Pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030707246s Jan 22 10:50:29.911: INFO: Pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056936736s Jan 22 10:50:31.965: INFO: Pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110269093s Jan 22 10:50:34.048: INFO: Pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193090076s Jan 22 10:50:36.290: INFO: Pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.435832933s STEP: Saw pod success Jan 22 10:50:36.290: INFO: Pod "pod-ff2b9a66-3d04-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 10:50:36.304: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ff2b9a66-3d04-11ea-ad91-0242ac110005 container test-container: STEP: delete the pod Jan 22 10:50:36.509: INFO: Waiting for pod pod-ff2b9a66-3d04-11ea-ad91-0242ac110005 to disappear Jan 22 10:50:36.526: INFO: Pod pod-ff2b9a66-3d04-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 10:50:36.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8k6m4" for this suite. 
Jan 22 10:50:42.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 10:50:42.798: INFO: namespace: e2e-tests-emptydir-8k6m4, resource: bindings, ignored listing per whitelist Jan 22 10:50:42.828: INFO: namespace e2e-tests-emptydir-8k6m4 deletion completed in 6.288929984s • [SLOW TEST:17.141 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 10:50:42.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 10:50:43.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--namespace=e2e-tests-kubectl-whtc4' Jan 22 10:50:43.472: INFO: stderr: "" Jan 22 10:50:43.472: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jan 22 10:50:43.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-whtc4' Jan 22 10:50:52.627: INFO: stderr: "" Jan 22 10:50:52.627: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 10:50:52.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-whtc4" for this suite. Jan 22 10:51:00.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 10:51:00.796: INFO: namespace: e2e-tests-kubectl-whtc4, resource: bindings, ignored listing per whitelist Jan 22 10:51:00.825: INFO: namespace e2e-tests-kubectl-whtc4 deletion completed in 8.182331448s • [SLOW TEST:17.996 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 10:51:00.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1421e116-3d05-11ea-ad91-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 22 10:51:01.032: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-hdjr9" to be "success or failure" Jan 22 10:51:01.064: INFO: Pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.822689ms Jan 22 10:51:03.077: INFO: Pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04468068s Jan 22 10:51:05.098: INFO: Pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065872066s Jan 22 10:51:07.113: INFO: Pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0806236s Jan 22 10:51:09.138: INFO: Pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105142198s Jan 22 10:51:11.416: INFO: Pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.383773865s STEP: Saw pod success Jan 22 10:51:11.416: INFO: Pod "pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 10:51:11.430: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 22 10:51:11.705: INFO: Waiting for pod pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005 to disappear Jan 22 10:51:11.713: INFO: Pod pod-projected-configmaps-142328c5-3d05-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 10:51:11.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hdjr9" for this suite. Jan 22 10:51:17.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 10:51:17.896: INFO: namespace: e2e-tests-projected-hdjr9, resource: bindings, ignored listing per whitelist Jan 22 10:51:17.940: INFO: namespace e2e-tests-projected-hdjr9 deletion completed in 6.216459652s • [SLOW TEST:17.115 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:51:17.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 10:51:18.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-ktk6q" to be "success or failure"
Jan 22 10:51:18.249: INFO: Pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.328646ms
Jan 22 10:51:20.417: INFO: Pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193720181s
Jan 22 10:51:22.436: INFO: Pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212729143s
Jan 22 10:51:24.454: INFO: Pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230681735s
Jan 22 10:51:26.481: INFO: Pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258113121s
Jan 22 10:51:28.507: INFO: Pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.283589624s
STEP: Saw pod success
Jan 22 10:51:28.507: INFO: Pod "downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 10:51:28.537: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005 container client-container:
STEP: delete the pod
Jan 22 10:51:28.769: INFO: Waiting for pod downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005 to disappear
Jan 22 10:51:28.882: INFO: Pod downwardapi-volume-1e60fec9-3d05-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:51:28.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ktk6q" for this suite.
Jan 22 10:51:34.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:51:35.145: INFO: namespace: e2e-tests-projected-ktk6q, resource: bindings, ignored listing per whitelist
Jan 22 10:51:35.210: INFO: namespace e2e-tests-projected-ktk6q deletion completed in 6.319573675s
• [SLOW TEST:17.270 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:51:35.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nbz7d
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-nbz7d
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-nbz7d
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-nbz7d
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-nbz7d
Jan 22 10:51:45.477: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nbz7d, name: ss-0, uid: 2d08bced-3d05-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 22 10:51:52.486: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nbz7d, name: ss-0, uid: 2d08bced-3d05-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 22 10:51:52.721: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nbz7d, name: ss-0, uid: 2d08bced-3d05-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 22 10:51:52.753: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-nbz7d
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-nbz7d
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-nbz7d and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 22 10:52:05.478: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nbz7d
Jan 22 10:52:05.485: INFO: Scaling statefulset ss to 0
Jan 22 10:52:25.585: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 10:52:25.594: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:52:25.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nbz7d" for this suite.
Jan 22 10:52:31.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:52:31.841: INFO: namespace: e2e-tests-statefulset-nbz7d, resource: bindings, ignored listing per whitelist
Jan 22 10:52:31.922: INFO: namespace e2e-tests-statefulset-nbz7d deletion completed in 6.278857663s
• [SLOW TEST:56.711 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:52:31.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 22 10:52:32.163: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4wj25,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wj25/configmaps/e2e-watch-test-resource-version,UID:4a66f8f7-3d05-11ea-a994-fa163e34d433,ResourceVersion:19064397,Generation:0,CreationTimestamp:2020-01-22 10:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 22 10:52:32.164: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4wj25,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wj25/configmaps/e2e-watch-test-resource-version,UID:4a66f8f7-3d05-11ea-a994-fa163e34d433,ResourceVersion:19064398,Generation:0,CreationTimestamp:2020-01-22 10:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:52:32.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4wj25" for this suite.
Jan 22 10:52:38.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:52:38.644: INFO: namespace: e2e-tests-watch-4wj25, resource: bindings, ignored listing per whitelist
Jan 22 10:52:38.648: INFO: namespace e2e-tests-watch-4wj25 deletion completed in 6.477338214s
• [SLOW TEST:6.726 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:52:38.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-4e69b7fc-3d05-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 10:52:38.837: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-56l4l" to be "success or failure"
Jan 22 10:52:38.850: INFO: Pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.955688ms
Jan 22 10:52:41.101: INFO: Pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263542124s
Jan 22 10:52:43.149: INFO: Pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31142308s
Jan 22 10:52:45.541: INFO: Pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.703254966s
Jan 22 10:52:47.556: INFO: Pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718673545s
Jan 22 10:52:49.955: INFO: Pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.118015733s
STEP: Saw pod success
Jan 22 10:52:49.955: INFO: Pod "pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 10:52:50.389: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 22 10:52:50.468: INFO: Waiting for pod pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005 to disappear
Jan 22 10:52:50.483: INFO: Pod pod-projected-configmaps-4e6e26f4-3d05-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:52:50.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-56l4l" for this suite.
Jan 22 10:52:56.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:52:56.813: INFO: namespace: e2e-tests-projected-56l4l, resource: bindings, ignored listing per whitelist
Jan 22 10:52:56.854: INFO: namespace e2e-tests-projected-56l4l deletion completed in 6.204191944s
• [SLOW TEST:18.205 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:52:56.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-74w9
STEP: Creating a pod to test atomic-volume-subpath
Jan 22 10:52:57.034: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-74w9" in namespace "e2e-tests-subpath-474nt" to be "success or failure"
Jan 22 10:52:57.056: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.192309ms
Jan 22 10:52:59.085: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05061277s
Jan 22 10:53:01.098: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06415957s
Jan 22 10:53:03.112: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077995217s
Jan 22 10:53:05.127: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092269918s
Jan 22 10:53:07.153: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118881362s
Jan 22 10:53:09.165: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.130368172s
Jan 22 10:53:11.184: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.149925966s
Jan 22 10:53:13.205: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 16.170452955s
Jan 22 10:53:15.221: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 18.187129796s
Jan 22 10:53:17.239: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 20.204576814s
Jan 22 10:53:19.255: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 22.220713587s
Jan 22 10:53:21.268: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 24.233992298s
Jan 22 10:53:23.285: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 26.250719436s
Jan 22 10:53:25.300: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 28.265830096s
Jan 22 10:53:27.316: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 30.28138276s
Jan 22 10:53:29.330: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Running", Reason="", readiness=false. Elapsed: 32.296152038s
Jan 22 10:53:31.343: INFO: Pod "pod-subpath-test-projected-74w9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.308744933s
STEP: Saw pod success
Jan 22 10:53:31.343: INFO: Pod "pod-subpath-test-projected-74w9" satisfied condition "success or failure"
Jan 22 10:53:31.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-74w9 container test-container-subpath-projected-74w9:
STEP: delete the pod
Jan 22 10:53:31.732: INFO: Waiting for pod pod-subpath-test-projected-74w9 to disappear
Jan 22 10:53:31.973: INFO: Pod pod-subpath-test-projected-74w9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-74w9
Jan 22 10:53:31.973: INFO: Deleting pod "pod-subpath-test-projected-74w9" in namespace "e2e-tests-subpath-474nt"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:53:31.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-474nt" for this suite.
Jan 22 10:53:38.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:53:38.219: INFO: namespace: e2e-tests-subpath-474nt, resource: bindings, ignored listing per whitelist
Jan 22 10:53:38.379: INFO: namespace e2e-tests-subpath-474nt deletion completed in 6.365427994s
• [SLOW TEST:41.525 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:53:38.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-72126764-3d05-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 10:53:38.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-bds24" to be "success or failure"
Jan 22 10:53:38.664: INFO: Pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.553512ms
Jan 22 10:53:40.685: INFO: Pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035613804s
Jan 22 10:53:42.698: INFO: Pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048297002s
Jan 22 10:53:44.740: INFO: Pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090343013s
Jan 22 10:53:46.751: INFO: Pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101055067s
Jan 22 10:53:48.775: INFO: Pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125463392s
STEP: Saw pod success
Jan 22 10:53:48.775: INFO: Pod "pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 10:53:48.780: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 22 10:53:49.041: INFO: Waiting for pod pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005 to disappear
Jan 22 10:53:49.242: INFO: Pod pod-configmaps-7212f943-3d05-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:53:49.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bds24" for this suite.
Jan 22 10:53:55.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:53:55.397: INFO: namespace: e2e-tests-configmap-bds24, resource: bindings, ignored listing per whitelist
Jan 22 10:53:55.462: INFO: namespace e2e-tests-configmap-bds24 deletion completed in 6.206476593s
• [SLOW TEST:17.083 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:53:55.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0122 10:54:05.799587 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 10:54:05.799: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:54:05.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-99jdv" for this suite.
Jan 22 10:54:11.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:54:11.973: INFO: namespace: e2e-tests-gc-99jdv, resource: bindings, ignored listing per whitelist
Jan 22 10:54:12.071: INFO: namespace e2e-tests-gc-99jdv deletion completed in 6.262417598s
• [SLOW TEST:16.608 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:54:12.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-863b233d-3d05-11ea-ad91-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-863b233d-3d05-11ea-ad91-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:55:51.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kzpft" for this suite.
Jan 22 10:56:15.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:56:15.235: INFO: namespace: e2e-tests-projected-kzpft, resource: bindings, ignored listing per whitelist
Jan 22 10:56:15.283: INFO: namespace e2e-tests-projected-kzpft deletion completed in 24.248231941s
• [SLOW TEST:123.212 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:56:15.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xtx6q
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 22 10:56:15.568: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 22 10:56:48.046: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xtx6q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 10:56:48.046: INFO: >>> kubeConfig: /root/.kube/config
I0122 10:56:48.144580 8 log.go:172] (0xc00090f1e0) (0xc00141f5e0) Create stream
I0122 10:56:48.144683 8 log.go:172] (0xc00090f1e0) (0xc00141f5e0) Stream added, broadcasting: 1
I0122 10:56:48.152616 8 log.go:172] (0xc00090f1e0) Reply frame received for 1
I0122 10:56:48.152728 8 log.go:172] (0xc00090f1e0) (0xc0019ed220) Create stream
I0122 10:56:48.152757 8 log.go:172] (0xc00090f1e0) (0xc0019ed220) Stream added, broadcasting: 3
I0122 10:56:48.156053 8 log.go:172] (0xc00090f1e0) Reply frame received for 3
I0122 10:56:48.156092 8 log.go:172] (0xc00090f1e0) (0xc0019ed2c0) Create stream
I0122 10:56:48.156107 8 log.go:172] (0xc00090f1e0) (0xc0019ed2c0) Stream added, broadcasting: 5
I0122 10:56:48.158093 8 log.go:172] (0xc00090f1e0) Reply frame received for 5
I0122 10:56:48.365932 8 log.go:172] (0xc00090f1e0) Data frame received for 3
I0122 10:56:48.365994 8 log.go:172] (0xc0019ed220) (3) Data frame handling
I0122 10:56:48.366011 8 log.go:172] (0xc0019ed220) (3) Data frame sent
I0122 10:56:48.565390 8 log.go:172] (0xc00090f1e0) Data frame received for 1
I0122 10:56:48.565596 8 log.go:172] (0xc00090f1e0) (0xc0019ed220) Stream removed, broadcasting: 3
I0122 10:56:48.565717 8 log.go:172] (0xc00141f5e0) (1) Data frame handling
I0122 10:56:48.565775 8 log.go:172] (0xc00141f5e0) (1) Data frame sent
I0122 10:56:48.566030 8 log.go:172] (0xc00090f1e0) (0xc00141f5e0) Stream removed, broadcasting: 1
I0122 10:56:48.566162 8 log.go:172] (0xc00090f1e0) (0xc0019ed2c0) Stream removed, broadcasting: 5
I0122 10:56:48.566200 8 log.go:172] (0xc00090f1e0) Go away received
I0122 10:56:48.566772 8 log.go:172] (0xc00090f1e0) (0xc00141f5e0) Stream removed, broadcasting: 1
I0122 10:56:48.566791 8 log.go:172] (0xc00090f1e0) (0xc0019ed220) Stream removed, broadcasting: 3
I0122 10:56:48.566802 8 log.go:172] (0xc00090f1e0) (0xc0019ed2c0) Stream removed, broadcasting: 5
Jan 22 10:56:48.566: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 10:56:48.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xtx6q" for this suite.
Jan 22 10:57:12.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 10:57:12.737: INFO: namespace: e2e-tests-pod-network-test-xtx6q, resource: bindings, ignored listing per whitelist
Jan 22 10:57:12.923: INFO: namespace e2e-tests-pod-network-test-xtx6q deletion completed in 24.302838617s
• [SLOW TEST:57.640 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 10:57:12.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f1f2593b-3d05-11ea-ad91-0242ac110005 STEP: Creating a pod to test consume secrets Jan 22 10:57:13.175: INFO: Waiting up to 5m0s for pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-d66zg" to be "success or failure" Jan 22 10:57:13.197: INFO: Pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.399364ms Jan 22 10:57:15.233: INFO: Pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057408976s Jan 22 10:57:17.248: INFO: Pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072664705s Jan 22 10:57:19.408: INFO: Pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232714468s Jan 22 10:57:21.422: INFO: Pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.246687658s Jan 22 10:57:23.437: INFO: Pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.261945526s STEP: Saw pod success Jan 22 10:57:23.437: INFO: Pod "pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 10:57:23.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 22 10:57:24.552: INFO: Waiting for pod pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005 to disappear Jan 22 10:57:24.611: INFO: Pod pod-secrets-f1f3ca73-3d05-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 10:57:24.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-d66zg" for this suite. Jan 22 10:57:30.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 10:57:30.920: INFO: namespace: e2e-tests-secrets-d66zg, resource: bindings, ignored listing per whitelist Jan 22 10:57:30.957: INFO: namespace e2e-tests-secrets-d66zg deletion completed in 6.264773982s • [SLOW TEST:18.034 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client Jan 22 10:57:30.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vmhxf [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 22 10:57:31.217: INFO: Found 0 stateful pods, waiting for 3 Jan 22 10:57:41.453: INFO: Found 2 stateful pods, waiting for 3 Jan 22 10:57:51.243: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 10:57:51.243: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 22 10:57:51.243: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 22 10:58:01.237: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 10:58:01.237: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 22 10:58:01.237: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 22 10:58:01.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vmhxf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 10:58:01.879: INFO: stderr: "I0122 10:58:01.490374 309 log.go:172] (0xc000138840) (0xc0005e1400) Create stream\nI0122 10:58:01.490725 309 log.go:172] (0xc000138840) 
(0xc0005e1400) Stream added, broadcasting: 1\nI0122 10:58:01.497123 309 log.go:172] (0xc000138840) Reply frame received for 1\nI0122 10:58:01.497187 309 log.go:172] (0xc000138840) (0xc0005d2000) Create stream\nI0122 10:58:01.497207 309 log.go:172] (0xc000138840) (0xc0005d2000) Stream added, broadcasting: 3\nI0122 10:58:01.498705 309 log.go:172] (0xc000138840) Reply frame received for 3\nI0122 10:58:01.498755 309 log.go:172] (0xc000138840) (0xc0005e14a0) Create stream\nI0122 10:58:01.498784 309 log.go:172] (0xc000138840) (0xc0005e14a0) Stream added, broadcasting: 5\nI0122 10:58:01.500096 309 log.go:172] (0xc000138840) Reply frame received for 5\nI0122 10:58:01.718490 309 log.go:172] (0xc000138840) Data frame received for 3\nI0122 10:58:01.718631 309 log.go:172] (0xc0005d2000) (3) Data frame handling\nI0122 10:58:01.718874 309 log.go:172] (0xc0005d2000) (3) Data frame sent\nI0122 10:58:01.868368 309 log.go:172] (0xc000138840) Data frame received for 1\nI0122 10:58:01.868524 309 log.go:172] (0xc000138840) (0xc0005d2000) Stream removed, broadcasting: 3\nI0122 10:58:01.868581 309 log.go:172] (0xc0005e1400) (1) Data frame handling\nI0122 10:58:01.868602 309 log.go:172] (0xc0005e1400) (1) Data frame sent\nI0122 10:58:01.868615 309 log.go:172] (0xc000138840) (0xc0005e1400) Stream removed, broadcasting: 1\nI0122 10:58:01.869212 309 log.go:172] (0xc000138840) (0xc0005e14a0) Stream removed, broadcasting: 5\nI0122 10:58:01.869257 309 log.go:172] (0xc000138840) (0xc0005e1400) Stream removed, broadcasting: 1\nI0122 10:58:01.869271 309 log.go:172] (0xc000138840) (0xc0005d2000) Stream removed, broadcasting: 3\nI0122 10:58:01.869283 309 log.go:172] (0xc000138840) (0xc0005e14a0) Stream removed, broadcasting: 5\n" Jan 22 10:58:01.880: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 10:58:01.880: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating 
StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 22 10:58:11.964: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 22 10:58:22.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vmhxf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 10:58:22.934: INFO: stderr: "I0122 10:58:22.328101 331 log.go:172] (0xc0003f8160) (0xc0007ab4a0) Create stream\nI0122 10:58:22.328319 331 log.go:172] (0xc0003f8160) (0xc0007ab4a0) Stream added, broadcasting: 1\nI0122 10:58:22.333248 331 log.go:172] (0xc0003f8160) Reply frame received for 1\nI0122 10:58:22.333292 331 log.go:172] (0xc0003f8160) (0xc0002e6000) Create stream\nI0122 10:58:22.333300 331 log.go:172] (0xc0003f8160) (0xc0002e6000) Stream added, broadcasting: 3\nI0122 10:58:22.336312 331 log.go:172] (0xc0003f8160) Reply frame received for 3\nI0122 10:58:22.336337 331 log.go:172] (0xc0003f8160) (0xc0007ab540) Create stream\nI0122 10:58:22.336344 331 log.go:172] (0xc0003f8160) (0xc0007ab540) Stream added, broadcasting: 5\nI0122 10:58:22.337918 331 log.go:172] (0xc0003f8160) Reply frame received for 5\nI0122 10:58:22.502806 331 log.go:172] (0xc0003f8160) Data frame received for 3\nI0122 10:58:22.503034 331 log.go:172] (0xc0002e6000) (3) Data frame handling\nI0122 10:58:22.503088 331 log.go:172] (0xc0002e6000) (3) Data frame sent\nI0122 10:58:22.922866 331 log.go:172] (0xc0003f8160) Data frame received for 1\nI0122 10:58:22.923066 331 log.go:172] (0xc0007ab4a0) (1) Data frame handling\nI0122 10:58:22.923137 331 log.go:172] (0xc0007ab4a0) (1) Data frame sent\nI0122 10:58:22.923306 331 log.go:172] (0xc0003f8160) (0xc0007ab4a0) Stream removed, broadcasting: 1\nI0122 10:58:22.923380 331 log.go:172] (0xc0003f8160) (0xc0007ab540) Stream removed, broadcasting: 5\nI0122 10:58:22.923464 331 log.go:172] (0xc0003f8160) 
(0xc0002e6000) Stream removed, broadcasting: 3\nI0122 10:58:22.923596 331 log.go:172] (0xc0003f8160) Go away received\nI0122 10:58:22.923938 331 log.go:172] (0xc0003f8160) (0xc0007ab4a0) Stream removed, broadcasting: 1\nI0122 10:58:22.923955 331 log.go:172] (0xc0003f8160) (0xc0002e6000) Stream removed, broadcasting: 3\nI0122 10:58:22.923958 331 log.go:172] (0xc0003f8160) (0xc0007ab540) Stream removed, broadcasting: 5\n" Jan 22 10:58:22.934: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 10:58:22.934: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 10:58:23.000: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 10:58:23.001: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:58:23.001: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:58:23.001: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:58:33.042: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 10:58:33.042: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:58:33.042: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:58:43.051: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 10:58:43.051: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:58:43.052: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:58:53.034: INFO: Waiting for 
StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 10:58:53.034: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:59:03.185: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 10:59:03.185: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 22 10:59:13.447: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update STEP: Rolling back to a previous revision Jan 22 10:59:23.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vmhxf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 10:59:23.746: INFO: stderr: "I0122 10:59:23.309633 352 log.go:172] (0xc000606420) (0xc00066f360) Create stream\nI0122 10:59:23.309860 352 log.go:172] (0xc000606420) (0xc00066f360) Stream added, broadcasting: 1\nI0122 10:59:23.315351 352 log.go:172] (0xc000606420) Reply frame received for 1\nI0122 10:59:23.315382 352 log.go:172] (0xc000606420) (0xc00066f400) Create stream\nI0122 10:59:23.315391 352 log.go:172] (0xc000606420) (0xc00066f400) Stream added, broadcasting: 3\nI0122 10:59:23.316488 352 log.go:172] (0xc000606420) Reply frame received for 3\nI0122 10:59:23.316506 352 log.go:172] (0xc000606420) (0xc00066f4a0) Create stream\nI0122 10:59:23.316515 352 log.go:172] (0xc000606420) (0xc00066f4a0) Stream added, broadcasting: 5\nI0122 10:59:23.318213 352 log.go:172] (0xc000606420) Reply frame received for 5\nI0122 10:59:23.548309 352 log.go:172] (0xc000606420) Data frame received for 3\nI0122 10:59:23.548439 352 log.go:172] (0xc00066f400) (3) Data frame handling\nI0122 10:59:23.548480 352 log.go:172] (0xc00066f400) (3) Data frame sent\nI0122 10:59:23.737121 352 log.go:172] (0xc000606420) (0xc00066f400) Stream removed, broadcasting: 3\nI0122 10:59:23.737336 352 log.go:172] 
(0xc000606420) Data frame received for 1\nI0122 10:59:23.737394 352 log.go:172] (0xc000606420) (0xc00066f4a0) Stream removed, broadcasting: 5\nI0122 10:59:23.737429 352 log.go:172] (0xc00066f360) (1) Data frame handling\nI0122 10:59:23.737450 352 log.go:172] (0xc00066f360) (1) Data frame sent\nI0122 10:59:23.737459 352 log.go:172] (0xc000606420) (0xc00066f360) Stream removed, broadcasting: 1\nI0122 10:59:23.737478 352 log.go:172] (0xc000606420) Go away received\nI0122 10:59:23.738033 352 log.go:172] (0xc000606420) (0xc00066f360) Stream removed, broadcasting: 1\nI0122 10:59:23.738043 352 log.go:172] (0xc000606420) (0xc00066f400) Stream removed, broadcasting: 3\nI0122 10:59:23.738048 352 log.go:172] (0xc000606420) (0xc00066f4a0) Stream removed, broadcasting: 5\n" Jan 22 10:59:23.746: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 10:59:23.746: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 10:59:33.958: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 22 10:59:45.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vmhxf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 10:59:45.733: INFO: stderr: "I0122 10:59:45.264564 375 log.go:172] (0xc00015c6e0) (0xc00074e640) Create stream\nI0122 10:59:45.264682 375 log.go:172] (0xc00015c6e0) (0xc00074e640) Stream added, broadcasting: 1\nI0122 10:59:45.270236 375 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0122 10:59:45.270347 375 log.go:172] (0xc00015c6e0) (0xc000656c80) Create stream\nI0122 10:59:45.270363 375 log.go:172] (0xc00015c6e0) (0xc000656c80) Stream added, broadcasting: 3\nI0122 10:59:45.271602 375 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0122 10:59:45.271629 375 log.go:172] (0xc00015c6e0) (0xc0006e4000) Create stream\nI0122 
10:59:45.271640 375 log.go:172] (0xc00015c6e0) (0xc0006e4000) Stream added, broadcasting: 5\nI0122 10:59:45.272509 375 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0122 10:59:45.446365 375 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0122 10:59:45.446590 375 log.go:172] (0xc000656c80) (3) Data frame handling\nI0122 10:59:45.446632 375 log.go:172] (0xc000656c80) (3) Data frame sent\nI0122 10:59:45.721661 375 log.go:172] (0xc00015c6e0) (0xc000656c80) Stream removed, broadcasting: 3\nI0122 10:59:45.722026 375 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0122 10:59:45.722043 375 log.go:172] (0xc00074e640) (1) Data frame handling\nI0122 10:59:45.722068 375 log.go:172] (0xc00074e640) (1) Data frame sent\nI0122 10:59:45.722163 375 log.go:172] (0xc00015c6e0) (0xc00074e640) Stream removed, broadcasting: 1\nI0122 10:59:45.722246 375 log.go:172] (0xc00015c6e0) (0xc0006e4000) Stream removed, broadcasting: 5\nI0122 10:59:45.722343 375 log.go:172] (0xc00015c6e0) Go away received\nI0122 10:59:45.722900 375 log.go:172] (0xc00015c6e0) (0xc00074e640) Stream removed, broadcasting: 1\nI0122 10:59:45.722987 375 log.go:172] (0xc00015c6e0) (0xc000656c80) Stream removed, broadcasting: 3\nI0122 10:59:45.723022 375 log.go:172] (0xc00015c6e0) (0xc0006e4000) Stream removed, broadcasting: 5\n" Jan 22 10:59:45.733: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 10:59:45.733: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 10:59:45.846: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 10:59:45.846: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 10:59:45.846: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 10:59:45.846: INFO: Waiting for Pod 
e2e-tests-statefulset-vmhxf/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 10:59:55.873: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 10:59:55.873: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 10:59:55.873: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 10:59:55.873: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 11:00:05.895: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 11:00:05.895: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 11:00:05.895: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 11:00:15.893: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 11:00:15.893: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 11:00:15.893: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 11:00:25.901: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 11:00:25.901: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 11:00:35.880: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update Jan 22 11:00:35.880: INFO: Waiting for Pod e2e-tests-statefulset-vmhxf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 22 11:00:45.882: INFO: Waiting for StatefulSet e2e-tests-statefulset-vmhxf/ss2 to complete update [AfterEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 22 11:00:55.888: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vmhxf Jan 22 11:00:55.906: INFO: Scaling statefulset ss2 to 0 Jan 22 11:01:36.001: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 11:01:36.008: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:01:36.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vmhxf" for this suite. Jan 22 11:01:44.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:01:44.169: INFO: namespace: e2e-tests-statefulset-vmhxf, resource: bindings, ignored listing per whitelist Jan 22 11:01:44.229: INFO: namespace e2e-tests-statefulset-vmhxf deletion completed in 8.178202739s • [SLOW TEST:253.272 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Jan 22 11:01:44.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jan 22 11:01:44.506: INFO: Waiting up to 5m0s for pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005" in namespace "e2e-tests-containers-jpgsw" to be "success or failure" Jan 22 11:01:44.519: INFO: Pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.337655ms Jan 22 11:01:46.550: INFO: Pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043762066s Jan 22 11:01:48.583: INFO: Pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076095818s Jan 22 11:01:50.813: INFO: Pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.306387122s Jan 22 11:01:52.828: INFO: Pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321521458s Jan 22 11:01:54.846: INFO: Pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.339634131s STEP: Saw pod success Jan 22 11:01:54.847: INFO: Pod "client-containers-93a3a435-3d06-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:01:54.862: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-93a3a435-3d06-11ea-ad91-0242ac110005 container test-container: STEP: delete the pod Jan 22 11:01:55.031: INFO: Waiting for pod client-containers-93a3a435-3d06-11ea-ad91-0242ac110005 to disappear Jan 22 11:01:55.037: INFO: Pod client-containers-93a3a435-3d06-11ea-ad91-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:01:55.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-jpgsw" for this suite. Jan 22 11:02:01.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:02:01.217: INFO: namespace: e2e-tests-containers-jpgsw, resource: bindings, ignored listing per whitelist Jan 22 11:02:01.229: INFO: namespace e2e-tests-containers-jpgsw deletion completed in 6.184814637s • [SLOW TEST:16.999 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jan 22 11:02:01.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xljpc Jan 22 11:02:11.499: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xljpc STEP: checking the pod's current state and verifying that restartCount is present Jan 22 11:02:11.507: INFO: Initial restart count of pod liveness-http is 0 Jan 22 11:02:32.429: INFO: Restart count of pod e2e-tests-container-probe-xljpc/liveness-http is now 1 (20.921572338s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:02:32.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xljpc" for this suite. 
Jan 22 11:02:38.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:02:38.875: INFO: namespace: e2e-tests-container-probe-xljpc, resource: bindings, ignored listing per whitelist Jan 22 11:02:38.945: INFO: namespace e2e-tests-container-probe-xljpc deletion completed in 6.340141068s • [SLOW TEST:37.716 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:02:38.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jan 22 11:02:39.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-vvm2j run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin 
closed'' Jan 22 11:02:51.517: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0122 11:02:50.312686 396 log.go:172] (0xc0005a4160) (0xc0002a68c0) Create stream\nI0122 11:02:50.312953 396 log.go:172] (0xc0005a4160) (0xc0002a68c0) Stream added, broadcasting: 1\nI0122 11:02:50.322627 396 log.go:172] (0xc0005a4160) Reply frame received for 1\nI0122 11:02:50.322762 396 log.go:172] (0xc0005a4160) (0xc0002a6960) Create stream\nI0122 11:02:50.322771 396 log.go:172] (0xc0005a4160) (0xc0002a6960) Stream added, broadcasting: 3\nI0122 11:02:50.323945 396 log.go:172] (0xc0005a4160) Reply frame received for 3\nI0122 11:02:50.323984 396 log.go:172] (0xc0005a4160) (0xc0000c8000) Create stream\nI0122 11:02:50.324007 396 log.go:172] (0xc0005a4160) (0xc0000c8000) Stream added, broadcasting: 5\nI0122 11:02:50.324967 396 log.go:172] (0xc0005a4160) Reply frame received for 5\nI0122 11:02:50.325036 396 log.go:172] (0xc0005a4160) (0xc0002a6a00) Create stream\nI0122 11:02:50.325058 396 log.go:172] (0xc0005a4160) (0xc0002a6a00) Stream added, broadcasting: 7\nI0122 11:02:50.326381 396 log.go:172] (0xc0005a4160) Reply frame received for 7\nI0122 11:02:50.326956 396 log.go:172] (0xc0002a6960) (3) Writing data frame\nI0122 11:02:50.327146 396 log.go:172] (0xc0002a6960) (3) Writing data frame\nI0122 11:02:50.333891 396 log.go:172] (0xc0005a4160) Data frame received for 5\nI0122 11:02:50.333918 396 log.go:172] (0xc0000c8000) (5) Data frame handling\nI0122 11:02:50.333935 396 log.go:172] (0xc0000c8000) (5) Data frame sent\nI0122 11:02:50.335756 396 log.go:172] (0xc0005a4160) Data frame received for 5\nI0122 11:02:50.335783 396 log.go:172] (0xc0000c8000) (5) Data frame handling\nI0122 11:02:50.335810 396 log.go:172] (0xc0000c8000) (5) Data frame sent\nI0122 11:02:51.467952 396 log.go:172] (0xc0005a4160) Data frame received 
for 1\nI0122 11:02:51.468206 396 log.go:172] (0xc0005a4160) (0xc0002a6960) Stream removed, broadcasting: 3\nI0122 11:02:51.468316 396 log.go:172] (0xc0002a68c0) (1) Data frame handling\nI0122 11:02:51.468353 396 log.go:172] (0xc0002a68c0) (1) Data frame sent\nI0122 11:02:51.468573 396 log.go:172] (0xc0005a4160) (0xc0002a68c0) Stream removed, broadcasting: 1\nI0122 11:02:51.468801 396 log.go:172] (0xc0005a4160) (0xc0000c8000) Stream removed, broadcasting: 5\nI0122 11:02:51.468833 396 log.go:172] (0xc0005a4160) (0xc0002a6a00) Stream removed, broadcasting: 7\nI0122 11:02:51.468863 396 log.go:172] (0xc0005a4160) Go away received\nI0122 11:02:51.469170 396 log.go:172] (0xc0005a4160) (0xc0002a68c0) Stream removed, broadcasting: 1\nI0122 11:02:51.469189 396 log.go:172] (0xc0005a4160) (0xc0002a6960) Stream removed, broadcasting: 3\nI0122 11:02:51.469216 396 log.go:172] (0xc0005a4160) (0xc0000c8000) Stream removed, broadcasting: 5\nI0122 11:02:51.469236 396 log.go:172] (0xc0005a4160) (0xc0002a6a00) Stream removed, broadcasting: 7\n" Jan 22 11:02:51.518: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:02:54.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vvm2j" for this suite. 
Jan 22 11:03:00.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:03:00.540: INFO: namespace: e2e-tests-kubectl-vvm2j, resource: bindings, ignored listing per whitelist Jan 22 11:03:00.731: INFO: namespace e2e-tests-kubectl-vvm2j deletion completed in 6.612572319s • [SLOW TEST:21.786 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:03:00.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 22 11:03:01.041: INFO: Waiting up to 5m0s for pod "pod-c14aa804-3d06-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-kl4f4" to be "success or failure" Jan 22 11:03:01.050: INFO: Pod "pod-c14aa804-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.668129ms Jan 22 11:03:03.063: INFO: Pod "pod-c14aa804-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022463047s Jan 22 11:03:05.229: INFO: Pod "pod-c14aa804-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188331362s Jan 22 11:03:07.319: INFO: Pod "pod-c14aa804-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278475772s Jan 22 11:03:09.485: INFO: Pod "pod-c14aa804-3d06-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.443936967s STEP: Saw pod success Jan 22 11:03:09.485: INFO: Pod "pod-c14aa804-3d06-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:03:09.494: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c14aa804-3d06-11ea-ad91-0242ac110005 container test-container: STEP: delete the pod Jan 22 11:03:09.705: INFO: Waiting for pod pod-c14aa804-3d06-11ea-ad91-0242ac110005 to disappear Jan 22 11:03:09.716: INFO: Pod pod-c14aa804-3d06-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:03:09.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kl4f4" for this suite. 
Jan 22 11:03:15.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:03:15.987: INFO: namespace: e2e-tests-emptydir-kl4f4, resource: bindings, ignored listing per whitelist Jan 22 11:03:16.003: INFO: namespace e2e-tests-emptydir-kl4f4 deletion completed in 6.268412233s • [SLOW TEST:15.270 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:03:16.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gx2c9.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gx2c9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gx2c9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gx2c9.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gx2c9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gx2c9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 11:03:28.287: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.293: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.306: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.313: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.318: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from 
pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.340: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.451: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gx2c9.svc.cluster.local from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.467: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.473: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.478: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005) Jan 22 11:03:28.793: INFO: Lookups using e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gx2c9.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord] Jan 
22 11:03:34.086: INFO: DNS probes using e2e-tests-dns-gx2c9/dns-test-ca53db9d-3d06-11ea-ad91-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:03:34.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-gx2c9" for this suite. Jan 22 11:03:42.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:03:42.333: INFO: namespace: e2e-tests-dns-gx2c9, resource: bindings, ignored listing per whitelist Jan 22 11:03:42.391: INFO: namespace e2e-tests-dns-gx2c9 deletion completed in 8.188600323s • [SLOW TEST:26.389 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:03:42.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-da1b1583-3d06-11ea-ad91-0242ac110005 STEP: Creating a pod 
to test consume secrets Jan 22 11:03:42.770: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-62dvv" to be "success or failure" Jan 22 11:03:42.796: INFO: Pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.381089ms Jan 22 11:03:45.018: INFO: Pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248349638s Jan 22 11:03:47.062: INFO: Pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292202516s Jan 22 11:03:49.396: INFO: Pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625974632s Jan 22 11:03:51.423: INFO: Pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.652786589s Jan 22 11:03:53.445: INFO: Pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.674853628s STEP: Saw pod success Jan 22 11:03:53.445: INFO: Pod "pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:03:53.451: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 22 11:03:53.506: INFO: Waiting for pod pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005 to disappear Jan 22 11:03:53.522: INFO: Pod pod-projected-secrets-da1cbfa8-3d06-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:03:53.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-62dvv" for this suite. Jan 22 11:03:59.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:03:59.767: INFO: namespace: e2e-tests-projected-62dvv, resource: bindings, ignored listing per whitelist Jan 22 11:03:59.807: INFO: namespace e2e-tests-projected-62dvv deletion completed in 6.275189863s • [SLOW TEST:17.415 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:03:59.807: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jan 22 11:04:10.118: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-e46afd31-3d06-11ea-ad91-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-4sx86", SelfLink:"/api/v1/namespaces/e2e-tests-pods-4sx86/pods/pod-submit-remove-e46afd31-3d06-11ea-ad91-0242ac110005", UID:"e4746e29-3d06-11ea-a994-fa163e34d433", ResourceVersion:"19065999", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715287840, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"953657005"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-z54r2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001dafb80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z54r2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002018618), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001cea720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002018650)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002018670)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002018678), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00201867c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715287840, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715287848, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715287848, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715287840, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00179e340), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00179e360), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://f2c951c330989ec98df198a7f7727caf084650d5802c42bfdf95d1fd2af733bb"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:04:16.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4sx86" for this suite. 
Jan 22 11:04:22.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:04:23.158: INFO: namespace: e2e-tests-pods-4sx86, resource: bindings, ignored listing per whitelist Jan 22 11:04:23.212: INFO: namespace e2e-tests-pods-4sx86 deletion completed in 6.265741024s • [SLOW TEST:23.405 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:04:23.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 11:04:23.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lgb8c' Jan 22 11:04:23.681: INFO: stderr: 
"kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 22 11:04:23.681: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jan 22 11:04:25.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lgb8c' Jan 22 11:04:26.159: INFO: stderr: "" Jan 22 11:04:26.160: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:04:26.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lgb8c" for this suite. 
Jan 22 11:04:42.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:04:42.517: INFO: namespace: e2e-tests-kubectl-lgb8c, resource: bindings, ignored listing per whitelist Jan 22 11:04:42.570: INFO: namespace e2e-tests-kubectl-lgb8c deletion completed in 16.393511914s • [SLOW TEST:19.357 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:04:42.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-fdf8c282-3d06-11ea-ad91-0242ac110005 STEP: Creating a pod to test consume secrets Jan 22 11:04:42.895: INFO: Waiting up to 5m0s for pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-v2qmg" to be "success or failure" Jan 22 11:04:42.998: INFO: Pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005": 
Phase="Pending", Reason="", readiness=false. Elapsed: 102.853759ms Jan 22 11:04:45.062: INFO: Pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167576899s Jan 22 11:04:47.078: INFO: Pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182891434s Jan 22 11:04:49.101: INFO: Pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206313954s Jan 22 11:04:51.160: INFO: Pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265634651s Jan 22 11:04:53.362: INFO: Pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.46700565s STEP: Saw pod success Jan 22 11:04:53.362: INFO: Pod "pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:04:53.375: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 22 11:04:53.507: INFO: Waiting for pod pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005 to disappear Jan 22 11:04:53.520: INFO: Pod pod-secrets-fdfb2010-3d06-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:04:53.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-v2qmg" for this suite. 
Jan 22 11:04:59.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:04:59.708: INFO: namespace: e2e-tests-secrets-v2qmg, resource: bindings, ignored listing per whitelist Jan 22 11:04:59.814: INFO: namespace e2e-tests-secrets-v2qmg deletion completed in 6.284220629s • [SLOW TEST:17.244 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:04:59.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 11:05:00.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4j9sf' Jan 22 11:05:00.292: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 22 11:05:00.292: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jan 22 11:05:00.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-4j9sf' Jan 22 11:05:00.494: INFO: stderr: "" Jan 22 11:05:00.494: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:05:00.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4j9sf" for this suite. 
Jan 22 11:05:24.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:05:24.666: INFO: namespace: e2e-tests-kubectl-4j9sf, resource: bindings, ignored listing per whitelist Jan 22 11:05:24.842: INFO: namespace e2e-tests-kubectl-4j9sf deletion completed in 24.296083048s • [SLOW TEST:25.027 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:05:24.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 22 11:05:25.053: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-mb5vn" to be "success or failure" Jan 22 
11:05:25.070: INFO: Pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.918808ms Jan 22 11:05:27.201: INFO: Pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148001791s Jan 22 11:05:29.237: INFO: Pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184678948s Jan 22 11:05:31.396: INFO: Pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343146712s Jan 22 11:05:33.408: INFO: Pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.354860987s Jan 22 11:05:35.422: INFO: Pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.368891781s STEP: Saw pod success Jan 22 11:05:35.422: INFO: Pod "downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:05:35.429: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005 container client-container: STEP: delete the pod Jan 22 11:05:35.601: INFO: Waiting for pod downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005 to disappear Jan 22 11:05:36.624: INFO: Pod downwardapi-volume-1719cc5c-3d07-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:05:36.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mb5vn" for this suite. 
Jan 22 11:05:42.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:05:43.101: INFO: namespace: e2e-tests-projected-mb5vn, resource: bindings, ignored listing per whitelist Jan 22 11:05:43.154: INFO: namespace e2e-tests-projected-mb5vn deletion completed in 6.512385529s • [SLOW TEST:18.312 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:05:43.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 22 11:05:43.529: INFO: Waiting up to 5m0s for pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-tljw8" to be "success or failure" Jan 22 11:05:43.545: INFO: Pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005": Phase="Pending", 
Reason="", readiness=false. Elapsed: 16.502642ms Jan 22 11:05:45.566: INFO: Pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037439211s Jan 22 11:05:47.583: INFO: Pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054583653s Jan 22 11:05:49.752: INFO: Pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223705348s Jan 22 11:05:51.770: INFO: Pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24111477s Jan 22 11:05:53.782: INFO: Pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.253469088s STEP: Saw pod success Jan 22 11:05:53.782: INFO: Pod "downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:05:53.987: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005 container client-container: STEP: delete the pod Jan 22 11:05:54.231: INFO: Waiting for pod downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005 to disappear Jan 22 11:05:54.242: INFO: Pod downwardapi-volume-222353b5-3d07-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:05:54.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tljw8" for this suite. 
Jan 22 11:06:00.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:06:00.414: INFO: namespace: e2e-tests-downward-api-tljw8, resource: bindings, ignored listing per whitelist Jan 22 11:06:00.434: INFO: namespace e2e-tests-downward-api-tljw8 deletion completed in 6.1786547s • [SLOW TEST:17.279 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:06:00.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:06:10.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-2wx4q" for this suite. 
Jan 22 11:07:04.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:07:05.021: INFO: namespace: e2e-tests-kubelet-test-2wx4q, resource: bindings, ignored listing per whitelist Jan 22 11:07:05.074: INFO: namespace e2e-tests-kubelet-test-2wx4q deletion completed in 54.178261757s • [SLOW TEST:64.640 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:07:05.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 22 11:07:05.307: INFO: Waiting up to 5m0s for pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-d4nls" to be "success or failure" Jan 22 11:07:05.319: INFO: Pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.463356ms Jan 22 11:07:07.389: INFO: Pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081485863s Jan 22 11:07:09.407: INFO: Pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09998491s Jan 22 11:07:11.439: INFO: Pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131605708s Jan 22 11:07:13.448: INFO: Pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140463557s Jan 22 11:07:16.076: INFO: Pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.769049241s STEP: Saw pod success Jan 22 11:07:16.076: INFO: Pod "pod-52e2cbc3-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:07:16.087: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-52e2cbc3-3d07-11ea-ad91-0242ac110005 container test-container: STEP: delete the pod Jan 22 11:07:16.377: INFO: Waiting for pod pod-52e2cbc3-3d07-11ea-ad91-0242ac110005 to disappear Jan 22 11:07:16.399: INFO: Pod pod-52e2cbc3-3d07-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:07:16.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d4nls" for this suite. 
Jan 22 11:07:22.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:07:22.645: INFO: namespace: e2e-tests-emptydir-d4nls, resource: bindings, ignored listing per whitelist Jan 22 11:07:22.688: INFO: namespace e2e-tests-emptydir-d4nls deletion completed in 6.27643502s • [SLOW TEST:17.615 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:07:22.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 22 11:07:23.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-2ktz6" to be "success or failure" Jan 22 11:07:23.091: INFO: Pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.966143ms Jan 22 11:07:25.525: INFO: Pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447619778s Jan 22 11:07:27.574: INFO: Pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.496579898s Jan 22 11:07:29.586: INFO: Pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509179886s Jan 22 11:07:31.603: INFO: Pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.525786504s Jan 22 11:07:33.659: INFO: Pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.581982334s STEP: Saw pod success Jan 22 11:07:33.659: INFO: Pod "downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:07:33.673: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005 container client-container: STEP: delete the pod Jan 22 11:07:33.754: INFO: Waiting for pod downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005 to disappear Jan 22 11:07:33.804: INFO: Pod downwardapi-volume-5d6c0b64-3d07-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:07:33.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2ktz6" for this suite. 
Jan 22 11:07:39.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:07:39.996: INFO: namespace: e2e-tests-downward-api-2ktz6, resource: bindings, ignored listing per whitelist Jan 22 11:07:40.036: INFO: namespace e2e-tests-downward-api-2ktz6 deletion completed in 6.223082274s • [SLOW TEST:17.348 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:07:40.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 22 11:07:58.438: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 11:07:58.499: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 11:08:00.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 11:08:00.550: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 11:08:02.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 11:08:02.528: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 11:08:04.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 11:08:04.520: INFO: Pod pod-with-prestop-http-hook still exists Jan 22 11:08:06.500: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 22 11:08:06.524: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:08:06.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mzch8" for this suite. 
Jan 22 11:08:30.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:08:30.687: INFO: namespace: e2e-tests-container-lifecycle-hook-mzch8, resource: bindings, ignored listing per whitelist Jan 22 11:08:30.759: INFO: namespace e2e-tests-container-lifecycle-hook-mzch8 deletion completed in 24.17205903s • [SLOW TEST:50.722 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:08:30.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 22 11:08:30.949: INFO: Waiting 
up to 5m0s for pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-d5lgx" to be "success or failure" Jan 22 11:08:30.963: INFO: Pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.465131ms Jan 22 11:08:32.990: INFO: Pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041047727s Jan 22 11:08:35.006: INFO: Pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056429609s Jan 22 11:08:37.042: INFO: Pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09219602s Jan 22 11:08:39.879: INFO: Pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.929533058s Jan 22 11:08:41.899: INFO: Pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.949748836s STEP: Saw pod success Jan 22 11:08:41.899: INFO: Pod "downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:08:41.906: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005 container client-container: STEP: delete the pod Jan 22 11:08:42.202: INFO: Waiting for pod downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005 to disappear Jan 22 11:08:42.251: INFO: Pod downwardapi-volume-85ef2a36-3d07-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:08:42.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d5lgx" for this suite. 
Jan 22 11:08:48.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:08:48.427: INFO: namespace: e2e-tests-projected-d5lgx, resource: bindings, ignored listing per whitelist Jan 22 11:08:48.472: INFO: namespace e2e-tests-projected-d5lgx deletion completed in 6.203335971s • [SLOW TEST:17.713 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:08:48.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 22 11:08:48.803: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
Jan 22 11:09:05.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-cbb56" for this suite. Jan 22 11:09:13.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:09:13.874: INFO: namespace: e2e-tests-init-container-cbb56, resource: bindings, ignored listing per whitelist Jan 22 11:09:14.033: INFO: namespace e2e-tests-init-container-cbb56 deletion completed in 8.447901097s • [SLOW TEST:25.561 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:09:14.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 22 11:09:14.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 22 11:09:14.496: INFO: stderr: "" Jan 22 11:09:14.496: INFO: stdout: "Client Version: 
version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:09:14.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lq2f5" for this suite. Jan 22 11:09:20.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:09:20.777: INFO: namespace: e2e-tests-kubectl-lq2f5, resource: bindings, ignored listing per whitelist Jan 22 11:09:20.900: INFO: namespace e2e-tests-kubectl-lq2f5 deletion completed in 6.391174911s • [SLOW TEST:6.866 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 
22 11:09:20.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 22 11:09:29.894: INFO: Successfully updated pod "labelsupdatea3d2c0b8-3d07-11ea-ad91-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:09:34.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t4twk" for this suite. Jan 22 11:09:58.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:09:58.317: INFO: namespace: e2e-tests-projected-t4twk, resource: bindings, ignored listing per whitelist Jan 22 11:09:58.343: INFO: namespace e2e-tests-projected-t4twk deletion completed in 24.310378963s • [SLOW TEST:37.443 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:09:58.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 22 11:09:58.617: INFO: Waiting up to 5m0s for pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-w5nmb" to be "success or failure"
Jan 22 11:09:58.633: INFO: Pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.724974ms
Jan 22 11:10:00.678: INFO: Pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061470042s
Jan 22 11:10:02.702: INFO: Pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084660065s
Jan 22 11:10:05.097: INFO: Pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.480046192s
Jan 22 11:10:07.585: INFO: Pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.967770965s
Jan 22 11:10:09.601: INFO: Pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.98407412s
STEP: Saw pod success
Jan 22 11:10:09.601: INFO: Pod "pod-ba2fd95b-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:10:09.606: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ba2fd95b-3d07-11ea-ad91-0242ac110005 container test-container:
STEP: delete the pod
Jan 22 11:10:09.878: INFO: Waiting for pod pod-ba2fd95b-3d07-11ea-ad91-0242ac110005 to disappear
Jan 22 11:10:09.921: INFO: Pod pod-ba2fd95b-3d07-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:10:09.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w5nmb" for this suite.
Jan 22 11:10:16.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:10:16.209: INFO: namespace: e2e-tests-emptydir-w5nmb, resource: bindings, ignored listing per whitelist
Jan 22 11:10:16.218: INFO: namespace e2e-tests-emptydir-w5nmb deletion completed in 6.285784655s

• [SLOW TEST:17.875 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:10:16.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c4d081e3-3d07-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 11:10:16.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-ccz2r" to be "success or failure"
Jan 22 11:10:16.577: INFO: Pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.451301ms
Jan 22 11:10:18.608: INFO: Pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108797842s
Jan 22 11:10:20.622: INFO: Pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122537006s
Jan 22 11:10:22.640: INFO: Pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140538383s
Jan 22 11:10:25.286: INFO: Pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.787096866s
Jan 22 11:10:27.298: INFO: Pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.799282594s
STEP: Saw pod success
Jan 22 11:10:27.298: INFO: Pod "pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:10:27.470: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 22 11:10:27.570: INFO: Waiting for pod pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005 to disappear
Jan 22 11:10:27.615: INFO: Pod pod-configmaps-c4d2e381-3d07-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:10:27.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ccz2r" for this suite.
Jan 22 11:10:33.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:10:33.795: INFO: namespace: e2e-tests-configmap-ccz2r, resource: bindings, ignored listing per whitelist
Jan 22 11:10:33.862: INFO: namespace e2e-tests-configmap-ccz2r deletion completed in 6.240423653s

• [SLOW TEST:17.644 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:10:33.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-cf73375a-3d07-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 11:10:34.332: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-j9cb5" to be "success or failure"
Jan 22 11:10:34.389: INFO: Pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.838596ms
Jan 22 11:10:36.401: INFO: Pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069357652s
Jan 22 11:10:38.575: INFO: Pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243231567s
Jan 22 11:10:40.599: INFO: Pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26727295s
Jan 22 11:10:42.622: INFO: Pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290091783s
Jan 22 11:10:44.634: INFO: Pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.302120329s
STEP: Saw pod success
Jan 22 11:10:44.634: INFO: Pod "pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:10:44.638: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 22 11:10:44.795: INFO: Waiting for pod pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005 to disappear
Jan 22 11:10:44.819: INFO: Pod pod-projected-configmaps-cf77d3f7-3d07-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:10:44.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j9cb5" for this suite.
Jan 22 11:10:50.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:10:51.216: INFO: namespace: e2e-tests-projected-j9cb5, resource: bindings, ignored listing per whitelist
Jan 22 11:10:51.234: INFO: namespace e2e-tests-projected-j9cb5 deletion completed in 6.395527974s

• [SLOW TEST:17.370 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:10:51.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d9c05b54-3d07-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 11:10:51.584: INFO: Waiting up to 5m0s for pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-xlpmw" to be "success or failure"
Jan 22 11:10:51.596: INFO: Pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.395226ms
Jan 22 11:10:53.624: INFO: Pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040135073s
Jan 22 11:10:55.639: INFO: Pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054515929s
Jan 22 11:10:58.147: INFO: Pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562655703s
Jan 22 11:11:00.168: INFO: Pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.583778937s
Jan 22 11:11:02.194: INFO: Pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.610219962s
STEP: Saw pod success
Jan 22 11:11:02.195: INFO: Pod "pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:11:02.202: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005 container secret-env-test:
STEP: delete the pod
Jan 22 11:11:02.270: INFO: Waiting for pod pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005 to disappear
Jan 22 11:11:02.392: INFO: Pod pod-secrets-d9c1ad7b-3d07-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:11:02.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xlpmw" for this suite.
Jan 22 11:11:08.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:11:09.419: INFO: namespace: e2e-tests-secrets-xlpmw, resource: bindings, ignored listing per whitelist
Jan 22 11:11:09.452: INFO: namespace e2e-tests-secrets-xlpmw deletion completed in 7.040498021s

• [SLOW TEST:18.217 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:11:09.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-hp2sc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 22 11:11:09.584: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 22 11:11:45.762: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-hp2sc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 11:11:45.763: INFO: >>> kubeConfig: /root/.kube/config
I0122 11:11:45.866141 8 log.go:172] (0xc00090ec60) (0xc002311220) Create stream
I0122 11:11:45.866261 8 log.go:172] (0xc00090ec60) (0xc002311220) Stream added, broadcasting: 1
I0122 11:11:45.875963 8 log.go:172] (0xc00090ec60) Reply frame received for 1
I0122 11:11:45.876100 8 log.go:172] (0xc00090ec60) (0xc000027f40) Create stream
I0122 11:11:45.876130 8 log.go:172] (0xc00090ec60) (0xc000027f40) Stream added, broadcasting: 3
I0122 11:11:45.878172 8 log.go:172] (0xc00090ec60) Reply frame received for 3
I0122 11:11:45.878205 8 log.go:172] (0xc00090ec60) (0xc0023112c0) Create stream
I0122 11:11:45.878214 8 log.go:172] (0xc00090ec60) (0xc0023112c0) Stream added, broadcasting: 5
I0122 11:11:45.879567 8 log.go:172] (0xc00090ec60) Reply frame received for 5
I0122 11:11:47.073594 8 log.go:172] (0xc00090ec60) Data frame received for 3
I0122 11:11:47.073718 8 log.go:172] (0xc000027f40) (3) Data frame handling
I0122 11:11:47.073772 8 log.go:172] (0xc000027f40) (3) Data frame sent
I0122 11:11:47.206763 8 log.go:172] (0xc00090ec60) Data frame received for 1
I0122 11:11:47.206844 8 log.go:172] (0xc002311220) (1) Data frame handling
I0122 11:11:47.206902 8 log.go:172] (0xc002311220) (1) Data frame sent
I0122 11:11:47.207290 8 log.go:172] (0xc00090ec60) (0xc002311220) Stream removed, broadcasting: 1
I0122 11:11:47.207375 8 log.go:172] (0xc00090ec60) (0xc000027f40) Stream removed, broadcasting: 3
I0122 11:11:47.207490 8 log.go:172] (0xc00090ec60) (0xc0023112c0) Stream removed, broadcasting: 5
I0122 11:11:47.207910 8 log.go:172] (0xc00090ec60) Go away received
I0122 11:11:47.207975 8 log.go:172] (0xc00090ec60) (0xc002311220) Stream removed, broadcasting: 1
I0122 11:11:47.208000 8 log.go:172] (0xc00090ec60) (0xc000027f40) Stream removed, broadcasting: 3
I0122 11:11:47.208018 8 log.go:172] (0xc00090ec60) (0xc0023112c0) Stream removed, broadcasting: 5
Jan 22 11:11:47.208: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:11:47.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-hp2sc" for this suite.
Jan 22 11:12:11.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:12:11.333: INFO: namespace: e2e-tests-pod-network-test-hp2sc, resource: bindings, ignored listing per whitelist
Jan 22 11:12:11.517: INFO: namespace e2e-tests-pod-network-test-hp2sc deletion completed in 24.29374234s

• [SLOW TEST:62.065 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:12:11.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 11:12:11.747: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 22 11:12:16.981: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 22 11:12:21.002: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 22 11:12:21.086: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vp4jf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vp4jf/deployments/test-cleanup-deployment,UID:0f1253f5-3d08-11ea-a994-fa163e34d433,ResourceVersion:19067099,Generation:1,CreationTimestamp:2020-01-22 11:12:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Jan 22 11:12:21.089: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:12:21.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vp4jf" for this suite.
Jan 22 11:12:29.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:12:30.005: INFO: namespace: e2e-tests-deployment-vp4jf, resource: bindings, ignored listing per whitelist
Jan 22 11:12:30.030: INFO: namespace e2e-tests-deployment-vp4jf deletion completed in 8.916050539s

• [SLOW TEST:18.513 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:12:30.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 11:12:30.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-v25kq" to be "success or failure"
Jan 22 11:12:30.458: INFO: Pod "downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.207262ms
Jan 22 11:12:32.758: INFO: Pod "downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355304184s
Jan 22 11:12:34.792: INFO: Pod "downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.389739124s
Jan 22 11:12:36.817: INFO: Pod "downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414427149s
Jan 22 11:12:38.840: INFO: Pod "downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.437216579s
STEP: Saw pod success
Jan 22 11:12:38.840: INFO: Pod "downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:12:38.852: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005 container client-container:
STEP: delete the pod
Jan 22 11:12:38.928: INFO: Waiting for pod downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005 to disappear
Jan 22 11:12:38.983: INFO: Pod downwardapi-volume-14a9e992-3d08-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:12:38.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v25kq" for this suite.
Jan 22 11:12:45.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:12:45.179: INFO: namespace: e2e-tests-projected-v25kq, resource: bindings, ignored listing per whitelist
Jan 22 11:12:45.223: INFO: namespace e2e-tests-projected-v25kq deletion completed in 6.21671663s

• [SLOW TEST:15.192 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:12:45.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 22 11:12:56.119: INFO: Successfully updated pod "labelsupdate1da4e131-3d08-11ea-ad91-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:12:58.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-29jcn" for this suite.
Jan 22 11:13:22.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:13:22.833: INFO: namespace: e2e-tests-downward-api-29jcn, resource: bindings, ignored listing per whitelist
Jan 22 11:13:22.844: INFO: namespace e2e-tests-downward-api-29jcn deletion completed in 24.398980415s

• [SLOW TEST:37.620 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:13:22.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 22 11:13:23.103: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 22 11:13:23.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:25.228: INFO: stderr: ""
Jan 22 11:13:25.228: INFO: stdout: "service/redis-slave created\n"
Jan 22 11:13:25.229: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 22 11:13:25.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:25.682: INFO: stderr: ""
Jan 22 11:13:25.682: INFO: stdout: "service/redis-master created\n"
Jan 22 11:13:25.683: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 22 11:13:25.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:26.033: INFO: stderr: ""
Jan 22 11:13:26.034: INFO: stdout: "service/frontend created\n"
Jan 22 11:13:26.034: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 22 11:13:26.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:26.386: INFO: stderr: ""
Jan 22 11:13:26.386: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 22 11:13:26.386: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 22 11:13:26.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:26.773: INFO: stderr: ""
Jan 22 11:13:26.773: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 22 11:13:26.774: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 22 11:13:26.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:27.200: INFO: stderr: ""
Jan 22 11:13:27.200: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 22 11:13:27.200: INFO: Waiting for all frontend pods to be Running.
Jan 22 11:13:52.252: INFO: Waiting for frontend to serve content.
Jan 22 11:13:53.966: INFO: Trying to add a new entry to the guestbook.
Jan 22 11:13:54.011: INFO: Verifying that added entry can be retrieved.
Jan 22 11:13:54.030: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Jan 22 11:13:59.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:59.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 11:13:59.347: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 11:13:59.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:59.642: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 11:13:59.642: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 11:13:59.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:13:59.839: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 11:13:59.839: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 11:13:59.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:14:00.005: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 11:14:00.005: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 11:14:00.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:14:00.355: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 11:14:00.355: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 22 11:14:00.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r9zp5'
Jan 22 11:14:00.682: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 11:14:00.682: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:14:00.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r9zp5" for this suite.
Jan 22 11:14:44.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:14:44.893: INFO: namespace: e2e-tests-kubectl-r9zp5, resource: bindings, ignored listing per whitelist Jan 22 11:14:45.017: INFO: namespace e2e-tests-kubectl-r9zp5 deletion completed in 44.321630153s • [SLOW TEST:82.172 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:14:45.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5m955 in namespace e2e-tests-proxy-kzgtm I0122 11:14:45.617742 8 runners.go:184] Created replication controller with name: proxy-service-5m955, namespace: e2e-tests-proxy-kzgtm, replica count: 1 I0122 11:14:46.668850 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0122 11:14:47.669274 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 11:14:48.669499 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 11:14:49.669790 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 11:14:50.670007 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 11:14:51.670331 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 11:14:52.670539 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 11:14:53.670876 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0122 11:14:54.671416 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 11:14:55.671782 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 11:14:56.672108 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 11:14:57.672463 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 11:14:58.672808 
8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 11:14:59.673125 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 11:15:00.673396 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0122 11:15:01.673875 8 runners.go:184] proxy-service-5m955 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 22 11:15:01.687: INFO: setup took 16.295006974s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 22 11:15:01.740: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kzgtm/pods/http:proxy-service-5m955-mqvjt:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 22 11:15:22.125: INFO: Pod name wrapped-volume-race-7aff3301-3d08-11ea-ad91-0242ac110005: Found 0 pods out of 5 Jan 22 11:15:27.158: INFO: Pod name wrapped-volume-race-7aff3301-3d08-11ea-ad91-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7aff3301-3d08-11ea-ad91-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vrxz6, will wait for the garbage collector to delete the pods Jan 22 11:18:01.384: INFO: Deleting ReplicationController wrapped-volume-race-7aff3301-3d08-11ea-ad91-0242ac110005 took: 47.721202ms Jan 22 
11:18:01.685: INFO: Terminating ReplicationController wrapped-volume-race-7aff3301-3d08-11ea-ad91-0242ac110005 pods took: 300.788676ms STEP: Creating RC which spawns configmap-volume pods Jan 22 11:18:53.145: INFO: Pod name wrapped-volume-race-f8b4ee35-3d08-11ea-ad91-0242ac110005: Found 0 pods out of 5 Jan 22 11:18:58.257: INFO: Pod name wrapped-volume-race-f8b4ee35-3d08-11ea-ad91-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f8b4ee35-3d08-11ea-ad91-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vrxz6, will wait for the garbage collector to delete the pods Jan 22 11:20:52.465: INFO: Deleting ReplicationController wrapped-volume-race-f8b4ee35-3d08-11ea-ad91-0242ac110005 took: 31.506418ms Jan 22 11:20:52.966: INFO: Terminating ReplicationController wrapped-volume-race-f8b4ee35-3d08-11ea-ad91-0242ac110005 pods took: 500.917063ms STEP: Creating RC which spawns configmap-volume pods Jan 22 11:21:42.988: INFO: Pod name wrapped-volume-race-5df6b98a-3d09-11ea-ad91-0242ac110005: Found 0 pods out of 5 Jan 22 11:21:48.019: INFO: Pod name wrapped-volume-race-5df6b98a-3d09-11ea-ad91-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5df6b98a-3d09-11ea-ad91-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vrxz6, will wait for the garbage collector to delete the pods Jan 22 11:24:02.217: INFO: Deleting ReplicationController wrapped-volume-race-5df6b98a-3d09-11ea-ad91-0242ac110005 took: 57.149839ms Jan 22 11:24:02.417: INFO: Terminating ReplicationController wrapped-volume-race-5df6b98a-3d09-11ea-ad91-0242ac110005 pods took: 200.739435ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:24:55.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-tests-emptydir-wrapper-vrxz6" for this suite. Jan 22 11:25:03.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:25:03.253: INFO: namespace: e2e-tests-emptydir-wrapper-vrxz6, resource: bindings, ignored listing per whitelist Jan 22 11:25:03.313: INFO: namespace e2e-tests-emptydir-wrapper-vrxz6 deletion completed in 8.277888472s • [SLOW TEST:582.470 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:25:03.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 22 11:25:03.520: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 22 11:25:03.530: INFO: Waiting for terminating namespaces to be deleted... 
Jan 22 11:25:03.534: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 22 11:25:03.555: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 22 11:25:03.555: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 22 11:25:03.555: INFO: Container coredns ready: true, restart count 0
Jan 22 11:25:03.555: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 22 11:25:03.555: INFO: Container kube-proxy ready: true, restart count 0
Jan 22 11:25:03.555: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 22 11:25:03.555: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 22 11:25:03.555: INFO: Container weave ready: true, restart count 0
Jan 22 11:25:03.555: INFO: Container weave-npc ready: true, restart count 0
Jan 22 11:25:03.555: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 22 11:25:03.555: INFO: Container coredns ready: true, restart count 0
Jan 22 11:25:03.555: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 22 11:25:03.555: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 22 11:25:03.669: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b453-3d09-11ea-ad91-0242ac110005.15ec31c7da324bdc], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-ttzz2/filler-pod-d5a6b453-3d09-11ea-ad91-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b453-3d09-11ea-ad91-0242ac110005.15ec31c9f61d4089], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b453-3d09-11ea-ad91-0242ac110005.15ec31ca8f08fdfb], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b453-3d09-11ea-ad91-0242ac110005.15ec31cab2a06fda], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15ec31cb1e2ab763], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:25:18.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-ttzz2" for this suite. Jan 22 11:25:25.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:25:26.067: INFO: namespace: e2e-tests-sched-pred-ttzz2, resource: bindings, ignored listing per whitelist Jan 22 11:25:26.123: INFO: namespace e2e-tests-sched-pred-ttzz2 deletion completed in 7.23899583s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.810 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:25:26.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rfblc Jan 22 11:25:34.867: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rfblc STEP: checking the pod's current state and verifying that restartCount is present Jan 22 11:25:34.873: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:29:36.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rfblc" for this suite. Jan 22 11:29:42.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:29:42.917: INFO: namespace: e2e-tests-container-probe-rfblc, resource: bindings, ignored listing per whitelist Jan 22 11:29:42.923: INFO: namespace e2e-tests-container-probe-rfblc deletion completed in 6.317025705s • [SLOW TEST:256.799 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:29:42.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 22 11:29:43.248: INFO: Waiting up to 5m0s for pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-tzd2j" to be "success or failure" Jan 22 11:29:43.380: INFO: Pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 132.350455ms Jan 22 11:29:45.395: INFO: Pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147598453s Jan 22 11:29:47.412: INFO: Pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164601631s Jan 22 11:29:49.431: INFO: Pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183459411s Jan 22 11:29:51.489: INFO: Pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241326207s Jan 22 11:29:53.497: INFO: Pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.248916782s STEP: Saw pod success Jan 22 11:29:53.497: INFO: Pod "downward-api-7c483748-3d0a-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:29:53.500: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7c483748-3d0a-11ea-ad91-0242ac110005 container dapi-container: STEP: delete the pod Jan 22 11:29:53.640: INFO: Waiting for pod downward-api-7c483748-3d0a-11ea-ad91-0242ac110005 to disappear Jan 22 11:29:53.647: INFO: Pod downward-api-7c483748-3d0a-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:29:53.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tzd2j" for this suite. Jan 22 11:29:59.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:29:59.833: INFO: namespace: e2e-tests-downward-api-tzd2j, resource: bindings, ignored listing per whitelist Jan 22 11:29:59.901: INFO: namespace e2e-tests-downward-api-tzd2j deletion completed in 6.243996448s • [SLOW TEST:16.978 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 
22 11:29:59.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:31:00.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5cc2g" for this suite. Jan 22 11:31:24.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:31:24.206: INFO: namespace: e2e-tests-container-probe-5cc2g, resource: bindings, ignored listing per whitelist Jan 22 11:31:24.274: INFO: namespace e2e-tests-container-probe-5cc2g deletion completed in 24.16969515s • [SLOW TEST:84.373 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jan 22 11:31:24.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 22 11:31:24.652: INFO: Waiting up to 5m0s for pod "downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-j9s5q" to be "success or failure" Jan 22 11:31:24.669: INFO: Pod "downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.877579ms Jan 22 11:31:26.700: INFO: Pod "downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047819615s Jan 22 11:31:28.721: INFO: Pod "downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069388193s Jan 22 11:31:30.939: INFO: Pod "downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28667504s Jan 22 11:31:32.964: INFO: Pod "downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.311838381s STEP: Saw pod success Jan 22 11:31:32.964: INFO: Pod "downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:31:32.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005 container dapi-container: STEP: delete the pod Jan 22 11:31:33.175: INFO: Waiting for pod downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005 to disappear Jan 22 11:31:33.194: INFO: Pod downward-api-b8b9309e-3d0a-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:31:33.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j9s5q" for this suite. Jan 22 11:31:39.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:31:39.369: INFO: namespace: e2e-tests-downward-api-j9s5q, resource: bindings, ignored listing per whitelist Jan 22 11:31:39.469: INFO: namespace e2e-tests-downward-api-j9s5q deletion completed in 6.267537623s • [SLOW TEST:15.194 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 
11:31:39.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-c1ace688-3d0a-11ea-ad91-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-c1ace6ef-3d0a-11ea-ad91-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c1ace688-3d0a-11ea-ad91-0242ac110005 STEP: Updating configmap cm-test-opt-upd-c1ace6ef-3d0a-11ea-ad91-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-c1ace858-3d0a-11ea-ad91-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:31:56.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2hpg2" for this suite. 
Jan 22 11:32:22.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:32:22.102: INFO: namespace: e2e-tests-configmap-2hpg2, resource: bindings, ignored listing per whitelist Jan 22 11:32:22.259: INFO: namespace e2e-tests-configmap-2hpg2 deletion completed in 26.236931791s • [SLOW TEST:42.789 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:32:22.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 22 11:32:22.493: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-6cgcn" to be "success or failure" Jan 22 11:32:22.514: INFO: Pod 
"downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.225315ms Jan 22 11:32:24.730: INFO: Pod "downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237218072s Jan 22 11:32:26.763: INFO: Pod "downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269591812s Jan 22 11:32:28.945: INFO: Pod "downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451953735s Jan 22 11:32:31.527: INFO: Pod "downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.034243801s Jan 22 11:32:34.110: INFO: Pod "downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.61702906s STEP: Saw pod success Jan 22 11:32:34.110: INFO: Pod "downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:32:34.120: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005 container client-container: STEP: delete the pod Jan 22 11:32:34.626: INFO: Waiting for pod downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005 to disappear Jan 22 11:32:34.638: INFO: Pod downwardapi-volume-db2f2c5f-3d0a-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:32:34.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6cgcn" for this suite. 
Jan 22 11:32:40.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:32:40.828: INFO: namespace: e2e-tests-downward-api-6cgcn, resource: bindings, ignored listing per whitelist
Jan 22 11:32:40.936: INFO: namespace e2e-tests-downward-api-6cgcn deletion completed in 6.214693854s

• [SLOW TEST:18.677 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:32:40.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 22 11:32:41.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:43.155: INFO: stderr: ""
Jan 22 11:32:43.155: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 22 11:32:43.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:43.305: INFO: stderr: ""
Jan 22 11:32:43.305: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan 22 11:32:48.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:48.472: INFO: stderr: ""
Jan 22 11:32:48.472: INFO: stdout: "update-demo-nautilus-4w7nz update-demo-nautilus-klj2q "
Jan 22 11:32:48.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w7nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:48.625: INFO: stderr: ""
Jan 22 11:32:48.625: INFO: stdout: ""
Jan 22 11:32:48.625: INFO: update-demo-nautilus-4w7nz is created but not running
Jan 22 11:32:53.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:53.803: INFO: stderr: ""
Jan 22 11:32:53.803: INFO: stdout: "update-demo-nautilus-4w7nz update-demo-nautilus-klj2q "
Jan 22 11:32:53.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w7nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:53.979: INFO: stderr: ""
Jan 22 11:32:53.979: INFO: stdout: ""
Jan 22 11:32:53.979: INFO: update-demo-nautilus-4w7nz is created but not running
Jan 22 11:32:58.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:59.151: INFO: stderr: ""
Jan 22 11:32:59.151: INFO: stdout: "update-demo-nautilus-4w7nz update-demo-nautilus-klj2q "
Jan 22 11:32:59.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w7nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:59.279: INFO: stderr: ""
Jan 22 11:32:59.279: INFO: stdout: "true"
Jan 22 11:32:59.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w7nz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:59.396: INFO: stderr: ""
Jan 22 11:32:59.396: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 22 11:32:59.396: INFO: validating pod update-demo-nautilus-4w7nz
Jan 22 11:32:59.419: INFO: got data: { "image": "nautilus.jpg" }
Jan 22 11:32:59.419: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 22 11:32:59.419: INFO: update-demo-nautilus-4w7nz is verified up and running
Jan 22 11:32:59.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klj2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:59.528: INFO: stderr: ""
Jan 22 11:32:59.528: INFO: stdout: "true"
Jan 22 11:32:59.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klj2q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:32:59.662: INFO: stderr: ""
Jan 22 11:32:59.662: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 22 11:32:59.662: INFO: validating pod update-demo-nautilus-klj2q
Jan 22 11:32:59.676: INFO: got data: { "image": "nautilus.jpg" }
Jan 22 11:32:59.676: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 22 11:32:59.676: INFO: update-demo-nautilus-klj2q is verified up and running
STEP: rolling-update to new replication controller
Jan 22 11:32:59.679: INFO: scanned /root for discovery docs:
Jan 22 11:32:59.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:33:33.905: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 22 11:33:33.905: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 22 11:33:33.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:33:34.213: INFO: stderr: ""
Jan 22 11:33:34.213: INFO: stdout: "update-demo-kitten-trgxr update-demo-kitten-w9bh4 "
Jan 22 11:33:34.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-trgxr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:33:34.354: INFO: stderr: ""
Jan 22 11:33:34.355: INFO: stdout: "true"
Jan 22 11:33:34.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-trgxr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:33:34.487: INFO: stderr: ""
Jan 22 11:33:34.487: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 22 11:33:34.487: INFO: validating pod update-demo-kitten-trgxr
Jan 22 11:33:34.527: INFO: got data: { "image": "kitten.jpg" }
Jan 22 11:33:34.527: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 22 11:33:34.527: INFO: update-demo-kitten-trgxr is verified up and running
Jan 22 11:33:34.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w9bh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:33:34.660: INFO: stderr: ""
Jan 22 11:33:34.660: INFO: stdout: "true"
Jan 22 11:33:34.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w9bh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkkx6'
Jan 22 11:33:34.771: INFO: stderr: ""
Jan 22 11:33:34.771: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 22 11:33:34.771: INFO: validating pod update-demo-kitten-w9bh4
Jan 22 11:33:34.781: INFO: got data: { "image": "kitten.jpg" }
Jan 22 11:33:34.781: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
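The `kubectl rolling-update` command the Update Demo test drives is deprecated, as its own stderr notes ("use \"rollout\" instead"). A sketch of the equivalent declarative setup (Deployment name and strategy values are illustrative; only the images and label come from the test):

```yaml
# Illustrative Deployment replacing the deprecated rolling-update flow: the
# Deployment controller performs the same surge/scale-down rollout that the
# log shows ("keep 2 pods available, don't exceed 3 pods").
apiVersion: apps/v1
kind: Deployment
metadata:
  name: update-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      name: update-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most replicas+1 pods during the rollout
      maxUnavailable: 0    # keep all 2 pods available throughout
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

With this in place, `kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0` followed by `kubectl rollout status deployment/update-demo` reproduces the nautilus-to-kitten swap the test performs imperatively.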
Jan 22 11:33:34.781: INFO: update-demo-kitten-w9bh4 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:33:34.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bkkx6" for this suite.
Jan 22 11:34:00.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:34:00.911: INFO: namespace: e2e-tests-kubectl-bkkx6, resource: bindings, ignored listing per whitelist
Jan 22 11:34:01.050: INFO: namespace e2e-tests-kubectl-bkkx6 deletion completed in 26.263868206s

• [SLOW TEST:80.114 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:34:01.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 22 11:34:01.854: INFO: Waiting up to 5m0s for pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64" in namespace "e2e-tests-svcaccounts-szl4x" to be "success or failure"
Jan 22 11:34:01.955: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 100.71779ms
Jan 22 11:34:03.991: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136508482s
Jan 22 11:34:06.006: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151785009s
Jan 22 11:34:08.018: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164437544s
Jan 22 11:34:10.397: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542493113s
Jan 22 11:34:12.523: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 10.669214981s
Jan 22 11:34:14.808: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 12.953563111s
Jan 22 11:34:16.825: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Pending", Reason="", readiness=false. Elapsed: 14.970592629s
Jan 22 11:34:19.622: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.768065812s
STEP: Saw pod success
Jan 22 11:34:19.622: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64" satisfied condition "success or failure"
Jan 22 11:34:19.639: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64 container token-test:
STEP: delete the pod
Jan 22 11:34:20.108: INFO: Waiting for pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64 to disappear
Jan 22 11:34:20.126: INFO: Pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-vpm64 no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 22 11:34:20.154: INFO: Waiting up to 5m0s for pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk" in namespace "e2e-tests-svcaccounts-szl4x" to be "success or failure"
Jan 22 11:34:20.233: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Pending", Reason="", readiness=false. Elapsed: 78.818069ms
Jan 22 11:34:22.401: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246770173s
Jan 22 11:34:24.410: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256077153s
Jan 22 11:34:26.510: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355386115s
Jan 22 11:34:28.805: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.651125746s
Jan 22 11:34:30.816: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661955151s
Jan 22 11:34:32.828: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.673664451s
Jan 22 11:34:34.847: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.69316502s
STEP: Saw pod success
Jan 22 11:34:34.848: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk" satisfied condition "success or failure"
Jan 22 11:34:34.864: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk container root-ca-test:
STEP: delete the pod
Jan 22 11:34:35.029: INFO: Waiting for pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk to disappear
Jan 22 11:34:35.042: INFO: Pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-cf9wk no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 22 11:34:35.065: INFO: Waiting up to 5m0s for pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g" in namespace "e2e-tests-svcaccounts-szl4x" to be "success or failure"
Jan 22 11:34:35.111: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 45.848811ms
Jan 22 11:34:37.124: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059353886s
Jan 22 11:34:39.138: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072939396s
Jan 22 11:34:41.389: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324342108s
Jan 22 11:34:43.409: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344346658s
Jan 22 11:34:45.617: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.551958143s
Jan 22 11:34:47.640: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.575619352s
Jan 22 11:34:49.840: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Pending", Reason="", readiness=false. Elapsed: 14.775227233s
Jan 22 11:34:52.008: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.943103583s
STEP: Saw pod success
Jan 22 11:34:52.008: INFO: Pod "pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g" satisfied condition "success or failure"
Jan 22 11:34:52.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g container namespace-test:
STEP: delete the pod
Jan 22 11:34:52.843: INFO: Waiting for pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g to disappear
Jan 22 11:34:52.873: INFO: Pod pod-service-account-166ba20a-3d0b-11ea-ad91-0242ac110005-x2l2g no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:34:52.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-szl4x" for this suite.
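The three ServiceAccounts pods (token-test, root-ca-test, namespace-test) each read one of the files that the service account admission controller projects into every pod at a well-known path. A minimal sketch of such a pod (names and image are illustrative; the mount path is the standard one):

```yaml
# Hypothetical pod: the auto-created service account credentials are mounted
# at /var/run/secrets/kubernetes.io/serviceaccount unless
# automountServiceAccountToken is set to false. The container just cats the
# three files the conformance test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: token-test
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    command:
    - sh
    - -c
    - |
      cat /var/run/secrets/kubernetes.io/serviceaccount/token
      cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
```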
Jan 22 11:35:00.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:35:00.973: INFO: namespace: e2e-tests-svcaccounts-szl4x, resource: bindings, ignored listing per whitelist
Jan 22 11:35:01.101: INFO: namespace e2e-tests-svcaccounts-szl4x deletion completed in 8.211727339s

• [SLOW TEST:60.051 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:35:01.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-39f3869f-3d0b-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 11:35:01.495: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-k7tmb" to be "success or failure"
Jan 22 11:35:01.511: INFO: Pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.112167ms
Jan 22 11:35:03.532: INFO: Pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036888195s
Jan 22 11:35:05.547: INFO: Pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051906234s
Jan 22 11:35:07.565: INFO: Pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069126737s
Jan 22 11:35:09.579: INFO: Pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084046353s
Jan 22 11:35:11.592: INFO: Pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096137731s
STEP: Saw pod success
Jan 22 11:35:11.592: INFO: Pod "pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:35:11.599: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 22 11:35:11.824: INFO: Waiting for pod pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005 to disappear
Jan 22 11:35:11.854: INFO: Pod pod-projected-configmaps-39f573e9-3d0b-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:35:11.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k7tmb" for this suite.
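The Projected configMap test consumes a ConfigMap through a projected volume with a key-to-path mapping, from a container running as a non-root user. A sketch of the shape of pod it creates (all names, the uid, and the key/path are illustrative, not taken from the test source):

```yaml
# Hypothetical pod: a projected volume sourcing a ConfigMap, remapping a key
# to a nested path, read by a container running as a non-root uid.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root, as the test name requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-2
            path: path/to/data-2   # mapping: key is exposed at this path
```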
Jan 22 11:35:18.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:35:18.257: INFO: namespace: e2e-tests-projected-k7tmb, resource: bindings, ignored listing per whitelist
Jan 22 11:35:18.326: INFO: namespace e2e-tests-projected-k7tmb deletion completed in 6.453591413s

• [SLOW TEST:17.224 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:35:18.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 22 11:35:18.578: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:35:37.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-rpkl4" for this suite.
Jan 22 11:35:45.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:35:45.327: INFO: namespace: e2e-tests-init-container-rpkl4, resource: bindings, ignored listing per whitelist
Jan 22 11:35:45.436: INFO: namespace e2e-tests-init-container-rpkl4 deletion completed in 8.23676787s

• [SLOW TEST:27.109 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:35:45.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 11:35:45.609: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 22 11:35:50.783: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 22 11:35:54.822: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 22 11:35:56.851: INFO: Creating deployment
"test-rollover-deployment"
Jan 22 11:35:56.903: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 22 11:35:59.042: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 22 11:35:59.056: INFO: Ensure that both replica sets have 1 created replica
Jan 22 11:35:59.063: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 22 11:35:59.076: INFO: Updating deployment test-rollover-deployment
Jan 22 11:35:59.076: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 22 11:36:01.603: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 22 11:36:02.010: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 22 11:36:02.032: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:02.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289759, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:04.067: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:04.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289759, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:06.379: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:06.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289759, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:08.065: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:08.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289767, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:10.058: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:10.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289767, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:12.066: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:12.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289767, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:14.062: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:14.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289767, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:16.062: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:16.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289767, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 11:36:18.049: INFO: all replica sets need to contain the pod-template-hash label
Jan 22 11:36:18.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0,
ext:63715289767, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715289757, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 22 11:36:20.134: INFO: Jan 22 11:36:20.134: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 22 11:36:20.155: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-47fsr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-47fsr/deployments/test-rollover-deployment,UID:5afcd797-3d0b-11ea-a994-fa163e34d433,ResourceVersion:19070109,Generation:2,CreationTimestamp:2020-01-22 11:35:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-22 11:35:57 +0000 UTC 2020-01-22 11:35:57 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-22 11:36:18 +0000 UTC 2020-01-22 11:35:57 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 22 11:36:20.165: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-47fsr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-47fsr/replicasets/test-rollover-deployment-5b8479fdb6,UID:5c4eb3e3-3d0b-11ea-a994-fa163e34d433,ResourceVersion:19070100,Generation:2,CreationTimestamp:2020-01-22 11:35:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5afcd797-3d0b-11ea-a994-fa163e34d433 0xc001af0907 0xc001af0908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 22 11:36:20.165: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 22 11:36:20.165: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-47fsr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-47fsr/replicasets/test-rollover-controller,UID:543e8625-3d0b-11ea-a994-fa163e34d433,ResourceVersion:19070108,Generation:2,CreationTimestamp:2020-01-22 11:35:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5afcd797-3d0b-11ea-a994-fa163e34d433 0xc001af072f 0xc001af0740}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 22 11:36:20.166: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-47fsr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-47fsr/replicasets/test-rollover-deployment-58494b7559,UID:5b0c72d7-3d0b-11ea-a994-fa163e34d433,ResourceVersion:19070068,Generation:2,CreationTimestamp:2020-01-22 11:35:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5afcd797-3d0b-11ea-a994-fa163e34d433 0xc001af0837 0xc001af0838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 22 11:36:20.176: INFO: Pod "test-rollover-deployment-5b8479fdb6-58fg4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-58fg4,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-47fsr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-47fsr/pods/test-rollover-deployment-5b8479fdb6-58fg4,UID:5c6a0bcb-3d0b-11ea-a994-fa163e34d433,ResourceVersion:19070085,Generation:0,CreationTimestamp:2020-01-22 11:35:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 5c4eb3e3-3d0b-11ea-a994-fa163e34d433 0xc001da1e47 0xc001da1e48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-v4wwz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v4wwz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-v4wwz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001df65e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001df6890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:35:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:36:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:36:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:35:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-22 11:35:59 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-22 11:36:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://351cdaf258f34e1b30252e5f0762c7b560c244474494881bb9a75d0b4ebd5986}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:36:20.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-47fsr" for this suite.
Jan 22 11:36:30.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:36:30.324: INFO: namespace: e2e-tests-deployment-47fsr, resource: bindings, ignored listing per whitelist
Jan 22 11:36:30.424: INFO: namespace e2e-tests-deployment-47fsr deletion completed in 10.239149145s
• [SLOW TEST:44.987 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:36:30.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:36:41.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-9zfvz" for this suite.
Jan 22 11:37:37.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:37:37.319: INFO: namespace: e2e-tests-kubelet-test-9zfvz, resource: bindings, ignored listing per whitelist
Jan 22 11:37:37.389: INFO: namespace e2e-tests-kubelet-test-9zfvz deletion completed in 56.219935006s
• [SLOW TEST:66.965 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:37:37.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 22 11:37:48.151: INFO: Successfully updated pod "pod-update-activedeadlineseconds-96fb3d40-3d0b-11ea-ad91-0242ac110005"
Jan 22 11:37:48.151: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-96fb3d40-3d0b-11ea-ad91-0242ac110005" in namespace "e2e-tests-pods-zqplg" to be "terminated due to deadline exceeded"
Jan 22 11:37:48.166: INFO: Pod "pod-update-activedeadlineseconds-96fb3d40-3d0b-11ea-ad91-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 14.332298ms
Jan 22 11:37:50.184: INFO: Pod "pod-update-activedeadlineseconds-96fb3d40-3d0b-11ea-ad91-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.033129092s
Jan 22 11:37:50.185: INFO: Pod "pod-update-activedeadlineseconds-96fb3d40-3d0b-11ea-ad91-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:37:50.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zqplg" for this suite.
Jan 22 11:37:56.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:37:56.627: INFO: namespace: e2e-tests-pods-zqplg, resource: bindings, ignored listing per whitelist
Jan 22 11:37:56.647: INFO: namespace e2e-tests-pods-zqplg deletion completed in 6.437652711s
• [SLOW TEST:19.258 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:37:56.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 22 11:38:07.564: INFO: Successfully updated pod "annotationupdatea283673c-3d0b-11ea-ad91-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:38:09.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-28qvn" for this suite.
Jan 22 11:38:33.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:38:33.899: INFO: namespace: e2e-tests-downward-api-28qvn, resource: bindings, ignored listing per whitelist
Jan 22 11:38:33.937: INFO: namespace e2e-tests-downward-api-28qvn deletion completed in 24.280567841s
• [SLOW TEST:37.290 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:38:33.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qf244
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 22 11:38:34.359: INFO: Found 0 stateful pods, waiting for 3
Jan 22 11:38:44.378: INFO: Found 2 stateful pods, waiting for 3
Jan 22 11:38:55.358: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:38:55.358: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:38:55.358: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 22 11:39:04.381: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:39:04.381: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:39:04.381: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 22 11:39:04.432: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 22 11:39:14.823: INFO: Updating stateful set ss2
Jan 22 11:39:14.852: INFO: Waiting for Pod e2e-tests-statefulset-qf244/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 22 11:39:26.363: INFO: Found 2 stateful pods, waiting for 3
Jan 22 11:39:36.390: INFO: Found 2 stateful pods, waiting for 3
Jan 22 11:39:46.386: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:39:46.386: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:39:46.386: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 22 11:39:56.383: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:39:56.383: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 11:39:56.383: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 22 11:39:56.433: INFO: Updating stateful set ss2
Jan 22 11:39:56.483: INFO: Waiting for Pod e2e-tests-statefulset-qf244/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 22 11:40:06.526: INFO: Waiting for Pod e2e-tests-statefulset-qf244/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 22 11:40:16.530: INFO: Updating stateful set ss2
Jan 22 11:40:16.582: INFO: Waiting for StatefulSet e2e-tests-statefulset-qf244/ss2 to complete update
Jan 22 11:40:16.582: INFO: Waiting for Pod e2e-tests-statefulset-qf244/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 22 11:40:26.650: INFO: Waiting for StatefulSet e2e-tests-statefulset-qf244/ss2 to complete update
Jan 22 11:40:26.650: INFO: Waiting for Pod e2e-tests-statefulset-qf244/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 22 11:40:36.617: INFO: Waiting for StatefulSet e2e-tests-statefulset-qf244/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 22 11:40:46.665: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qf244
Jan 22 11:40:46.688: INFO: Scaling statefulset ss2 to 0
Jan 22 11:41:06.920: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 11:41:06.929: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:41:06.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qf244" for this suite.
Jan 22 11:41:15.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:41:15.249: INFO: namespace: e2e-tests-statefulset-qf244, resource: bindings, ignored listing per whitelist
Jan 22 11:41:15.466: INFO: namespace e2e-tests-statefulset-qf244 deletion completed in 8.488355603s
• [SLOW TEST:161.528 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:41:15.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-18fae1a7-3d0c-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 11:41:15.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-xtj9p" to be "success or failure"
Jan 22 11:41:15.715: INFO: Pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.669431ms
Jan 22 11:41:17.837: INFO: Pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155970251s
Jan 22 11:41:19.853: INFO: Pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172361473s
Jan 22 11:41:22.255: INFO: Pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574150255s
Jan 22 11:41:24.266: INFO: Pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584798035s
Jan 22 11:41:26.452: INFO: Pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.770985356s
STEP: Saw pod success
Jan 22 11:41:26.452: INFO: Pod "pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:41:26.774: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 22 11:41:27.118: INFO: Waiting for pod pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005 to disappear
Jan 22 11:41:27.188: INFO: Pod pod-configmaps-18fbd18e-3d0c-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:41:27.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xtj9p" for this suite.
Jan 22 11:41:33.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:41:33.379: INFO: namespace: e2e-tests-configmap-xtj9p, resource: bindings, ignored listing per whitelist
Jan 22 11:41:33.409: INFO: namespace e2e-tests-configmap-xtj9p deletion completed in 6.210076491s
• [SLOW TEST:17.943 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:41:33.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9nnlk
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 22 11:41:33.556: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 22 11:42:11.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9nnlk
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 22 11:42:11.893: INFO: >>> kubeConfig: /root/.kube/config I0122 11:42:12.007068 8 log.go:172] (0xc0020d8420) (0xc001ade000) Create stream I0122 11:42:12.007165 8 log.go:172] (0xc0020d8420) (0xc001ade000) Stream added, broadcasting: 1 I0122 11:42:12.014558 8 log.go:172] (0xc0020d8420) Reply frame received for 1 I0122 11:42:12.014629 8 log.go:172] (0xc0020d8420) (0xc000ed5ea0) Create stream I0122 11:42:12.014647 8 log.go:172] (0xc0020d8420) (0xc000ed5ea0) Stream added, broadcasting: 3 I0122 11:42:12.016938 8 log.go:172] (0xc0020d8420) Reply frame received for 3 I0122 11:42:12.017024 8 log.go:172] (0xc0020d8420) (0xc0012c9680) Create stream I0122 11:42:12.017046 8 log.go:172] (0xc0020d8420) (0xc0012c9680) Stream added, broadcasting: 5 I0122 11:42:12.018471 8 log.go:172] (0xc0020d8420) Reply frame received for 5 I0122 11:42:12.271049 8 log.go:172] (0xc0020d8420) Data frame received for 3 I0122 11:42:12.271116 8 log.go:172] (0xc000ed5ea0) (3) Data frame handling I0122 11:42:12.271182 8 log.go:172] (0xc000ed5ea0) (3) Data frame sent I0122 11:42:12.430376 8 log.go:172] (0xc0020d8420) Data frame received for 1 I0122 11:42:12.430679 8 log.go:172] (0xc001ade000) (1) Data frame handling I0122 11:42:12.430721 8 log.go:172] (0xc001ade000) (1) Data frame sent I0122 11:42:12.432889 8 log.go:172] (0xc0020d8420) (0xc001ade000) Stream removed, broadcasting: 1 I0122 11:42:12.433061 8 log.go:172] (0xc0020d8420) (0xc0012c9680) Stream removed, broadcasting: 5 I0122 11:42:12.433139 8 log.go:172] (0xc0020d8420) (0xc000ed5ea0) Stream removed, broadcasting: 3 I0122 11:42:12.433199 8 log.go:172] (0xc0020d8420) Go away received I0122 11:42:12.433264 8 log.go:172] (0xc0020d8420) (0xc001ade000) Stream removed, broadcasting: 1 I0122 11:42:12.433291 8 log.go:172] (0xc0020d8420) (0xc000ed5ea0) Stream removed, broadcasting: 3 I0122 11:42:12.433400 8 log.go:172] 
(0xc0020d8420) (0xc0012c9680) Stream removed, broadcasting: 5 Jan 22 11:42:12.433: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:42:12.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9nnlk" for this suite. Jan 22 11:42:36.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:42:36.664: INFO: namespace: e2e-tests-pod-network-test-9nnlk, resource: bindings, ignored listing per whitelist Jan 22 11:42:36.713: INFO: namespace e2e-tests-pod-network-test-9nnlk deletion completed in 24.244244541s • [SLOW TEST:63.304 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:42:36.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 22 11:42:36.917: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 22 11:42:36.962: INFO: Number of nodes with available pods: 0 Jan 22 11:42:36.962: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:37.988: INFO: Number of nodes with available pods: 0 Jan 22 11:42:37.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:38.988: INFO: Number of nodes with available pods: 0 Jan 22 11:42:38.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:39.984: INFO: Number of nodes with available pods: 0 Jan 22 11:42:39.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:40.994: INFO: Number of nodes with available pods: 0 Jan 22 11:42:40.995: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:43.151: INFO: Number of nodes with available pods: 0 Jan 22 11:42:43.152: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:44.172: INFO: Number of nodes with available pods: 0 Jan 22 11:42:44.172: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:44.988: INFO: Number of nodes with available pods: 0 Jan 22 11:42:44.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:46.008: INFO: Number of nodes with available pods: 0 Jan 22 11:42:46.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:42:46.993: INFO: Number of nodes with available pods: 1 Jan 22 11:42:46.993: INFO: Number of running nodes: 1, number of 
available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 22 11:42:47.103: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:48.132: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:49.133: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:50.134: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:51.287: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:52.131: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:53.127: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:53.127: INFO: Pod daemon-set-dbssx is not available Jan 22 11:42:54.124: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:54.124: INFO: Pod daemon-set-dbssx is not available Jan 22 11:42:55.130: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:55.130: INFO: Pod daemon-set-dbssx is not available Jan 22 11:42:56.125: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 22 11:42:56.125: INFO: Pod daemon-set-dbssx is not available Jan 22 11:42:57.123: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:57.123: INFO: Pod daemon-set-dbssx is not available Jan 22 11:42:58.141: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:58.141: INFO: Pod daemon-set-dbssx is not available Jan 22 11:42:59.123: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:42:59.123: INFO: Pod daemon-set-dbssx is not available Jan 22 11:43:00.141: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:43:00.141: INFO: Pod daemon-set-dbssx is not available Jan 22 11:43:01.125: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:43:01.125: INFO: Pod daemon-set-dbssx is not available Jan 22 11:43:02.136: INFO: Wrong image for pod: daemon-set-dbssx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 22 11:43:02.136: INFO: Pod daemon-set-dbssx is not available Jan 22 11:43:03.146: INFO: Pod daemon-set-nbp6s is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 22 11:43:03.167: INFO: Number of nodes with available pods: 0 Jan 22 11:43:03.167: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:04.443: INFO: Number of nodes with available pods: 0 Jan 22 11:43:04.443: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:05.440: INFO: Number of nodes with available pods: 0 Jan 22 11:43:05.440: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:06.189: INFO: Number of nodes with available pods: 0 Jan 22 11:43:06.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:07.218: INFO: Number of nodes with available pods: 0 Jan 22 11:43:07.218: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:08.326: INFO: Number of nodes with available pods: 0 Jan 22 11:43:08.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:09.811: INFO: Number of nodes with available pods: 0 Jan 22 11:43:09.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:10.198: INFO: Number of nodes with available pods: 0 Jan 22 11:43:10.198: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:11.185: INFO: Number of nodes with available pods: 0 Jan 22 11:43:11.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 22 11:43:12.199: INFO: Number of nodes with available pods: 1 Jan 22 11:43:12.199: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4qzht, will wait for the garbage collector to delete the pods Jan 22 11:43:12.293: INFO: Deleting DaemonSet.extensions daemon-set took: 
16.615377ms Jan 22 11:43:12.393: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.342563ms Jan 22 11:43:19.308: INFO: Number of nodes with available pods: 0 Jan 22 11:43:19.308: INFO: Number of running nodes: 0, number of available pods: 0 Jan 22 11:43:19.313: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4qzht/daemonsets","resourceVersion":"19071093"},"items":null} Jan 22 11:43:19.317: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4qzht/pods","resourceVersion":"19071093"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:43:19.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4qzht" for this suite. Jan 22 11:43:25.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:43:25.654: INFO: namespace: e2e-tests-daemonsets-4qzht, resource: bindings, ignored listing per whitelist Jan 22 11:43:25.729: INFO: namespace e2e-tests-daemonsets-4qzht deletion completed in 6.391521729s • [SLOW TEST:49.015 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:43:25.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-66a7eeeb-3d0c-11ea-ad91-0242ac110005 STEP: Creating a pod to test consume secrets Jan 22 11:43:25.973: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-ffhcv" to be "success or failure" Jan 22 11:43:26.072: INFO: Pod "pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 98.607067ms Jan 22 11:43:28.406: INFO: Pod "pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432579254s Jan 22 11:43:30.974: INFO: Pod "pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.000622099s Jan 22 11:43:33.007: INFO: Pod "pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.033887774s Jan 22 11:43:35.032: INFO: Pod "pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.059000459s STEP: Saw pod success Jan 22 11:43:35.032: INFO: Pod "pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:43:35.046: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 22 11:43:35.209: INFO: Waiting for pod pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005 to disappear Jan 22 11:43:35.267: INFO: Pod pod-projected-secrets-66a8c20c-3d0c-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:43:35.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ffhcv" for this suite. Jan 22 11:43:41.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:43:41.453: INFO: namespace: e2e-tests-projected-ffhcv, resource: bindings, ignored listing per whitelist Jan 22 11:43:41.513: INFO: namespace e2e-tests-projected-ffhcv deletion completed in 6.225433637s • [SLOW TEST:15.783 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:43:41.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-5pbhn [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5pbhn STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-5pbhn Jan 22 11:43:41.728: INFO: Found 0 stateful pods, waiting for 1 Jan 22 11:43:51.743: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 22 11:43:51.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 11:43:52.558: INFO: stderr: "I0122 11:43:51.960789 1191 log.go:172] (0xc000138630) (0xc00077a5a0) Create stream\nI0122 11:43:51.961737 1191 log.go:172] (0xc000138630) (0xc00077a5a0) Stream added, broadcasting: 1\nI0122 11:43:51.968490 1191 log.go:172] (0xc000138630) Reply frame received for 1\nI0122 11:43:51.968617 1191 log.go:172] (0xc000138630) (0xc000652be0) Create stream\nI0122 11:43:51.968641 1191 log.go:172] (0xc000138630) (0xc000652be0) Stream added, 
broadcasting: 3\nI0122 11:43:51.969926 1191 log.go:172] (0xc000138630) Reply frame received for 3\nI0122 11:43:51.970002 1191 log.go:172] (0xc000138630) (0xc000360000) Create stream\nI0122 11:43:51.970012 1191 log.go:172] (0xc000138630) (0xc000360000) Stream added, broadcasting: 5\nI0122 11:43:51.971046 1191 log.go:172] (0xc000138630) Reply frame received for 5\nI0122 11:43:52.263416 1191 log.go:172] (0xc000138630) Data frame received for 3\nI0122 11:43:52.263529 1191 log.go:172] (0xc000652be0) (3) Data frame handling\nI0122 11:43:52.263553 1191 log.go:172] (0xc000652be0) (3) Data frame sent\nI0122 11:43:52.516661 1191 log.go:172] (0xc000138630) Data frame received for 1\nI0122 11:43:52.516863 1191 log.go:172] (0xc00077a5a0) (1) Data frame handling\nI0122 11:43:52.516922 1191 log.go:172] (0xc00077a5a0) (1) Data frame sent\nI0122 11:43:52.516949 1191 log.go:172] (0xc000138630) (0xc00077a5a0) Stream removed, broadcasting: 1\nI0122 11:43:52.532009 1191 log.go:172] (0xc000138630) (0xc000652be0) Stream removed, broadcasting: 3\nI0122 11:43:52.532409 1191 log.go:172] (0xc000138630) (0xc000360000) Stream removed, broadcasting: 5\nI0122 11:43:52.532470 1191 log.go:172] (0xc000138630) Go away received\nI0122 11:43:52.532777 1191 log.go:172] (0xc000138630) (0xc00077a5a0) Stream removed, broadcasting: 1\nI0122 11:43:52.532970 1191 log.go:172] (0xc000138630) (0xc000652be0) Stream removed, broadcasting: 3\nI0122 11:43:52.533019 1191 log.go:172] (0xc000138630) (0xc000360000) Stream removed, broadcasting: 5\n" Jan 22 11:43:52.558: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 11:43:52.558: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 11:43:52.682: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 22 11:43:52.682: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 11:43:52.704: INFO: Waiting 
for stateful set status.readyReplicas to become 0, currently 1 Jan 22 11:44:02.749: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:02.749: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:02.749: INFO: Jan 22 11:44:02.749: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 22 11:44:04.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979896961s Jan 22 11:44:05.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.497373337s Jan 22 11:44:06.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.912430045s Jan 22 11:44:07.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.884671449s Jan 22 11:44:08.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.811179235s Jan 22 11:44:10.015: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.786889992s Jan 22 11:44:11.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.713594636s Jan 22 11:44:12.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 660.719786ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5pbhn Jan 22 11:44:13.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:44:14.314: INFO: stderr: "I0122 11:44:13.515466 1212 log.go:172] (0xc0006ec370) (0xc0006295e0) Create stream\nI0122 11:44:13.515826 
1212 log.go:172] (0xc0006ec370) (0xc0006295e0) Stream added, broadcasting: 1\nI0122 11:44:13.523013 1212 log.go:172] (0xc0006ec370) Reply frame received for 1\nI0122 11:44:13.523049 1212 log.go:172] (0xc0006ec370) (0xc000720000) Create stream\nI0122 11:44:13.523059 1212 log.go:172] (0xc0006ec370) (0xc000720000) Stream added, broadcasting: 3\nI0122 11:44:13.526433 1212 log.go:172] (0xc0006ec370) Reply frame received for 3\nI0122 11:44:13.526460 1212 log.go:172] (0xc0006ec370) (0xc0007a2000) Create stream\nI0122 11:44:13.526469 1212 log.go:172] (0xc0006ec370) (0xc0007a2000) Stream added, broadcasting: 5\nI0122 11:44:13.527783 1212 log.go:172] (0xc0006ec370) Reply frame received for 5\nI0122 11:44:14.006235 1212 log.go:172] (0xc0006ec370) Data frame received for 3\nI0122 11:44:14.006319 1212 log.go:172] (0xc000720000) (3) Data frame handling\nI0122 11:44:14.006343 1212 log.go:172] (0xc000720000) (3) Data frame sent\nI0122 11:44:14.302821 1212 log.go:172] (0xc0006ec370) Data frame received for 1\nI0122 11:44:14.302998 1212 log.go:172] (0xc0006ec370) (0xc0007a2000) Stream removed, broadcasting: 5\nI0122 11:44:14.303067 1212 log.go:172] (0xc0006295e0) (1) Data frame handling\nI0122 11:44:14.303090 1212 log.go:172] (0xc0006295e0) (1) Data frame sent\nI0122 11:44:14.303181 1212 log.go:172] (0xc0006ec370) (0xc000720000) Stream removed, broadcasting: 3\nI0122 11:44:14.303212 1212 log.go:172] (0xc0006ec370) (0xc0006295e0) Stream removed, broadcasting: 1\nI0122 11:44:14.303235 1212 log.go:172] (0xc0006ec370) Go away received\nI0122 11:44:14.304077 1212 log.go:172] (0xc0006ec370) (0xc0006295e0) Stream removed, broadcasting: 1\nI0122 11:44:14.304092 1212 log.go:172] (0xc0006ec370) (0xc000720000) Stream removed, broadcasting: 3\nI0122 11:44:14.304098 1212 log.go:172] (0xc0006ec370) (0xc0007a2000) Stream removed, broadcasting: 5\n" Jan 22 11:44:14.314: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 11:44:14.314: INFO: stdout of mv -v 
/tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 11:44:14.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:44:15.077: INFO: stderr: "I0122 11:44:14.760548 1234 log.go:172] (0xc0008402c0) (0xc0007485a0) Create stream\nI0122 11:44:14.760984 1234 log.go:172] (0xc0008402c0) (0xc0007485a0) Stream added, broadcasting: 1\nI0122 11:44:14.778521 1234 log.go:172] (0xc0008402c0) Reply frame received for 1\nI0122 11:44:14.778702 1234 log.go:172] (0xc0008402c0) (0xc00030ec80) Create stream\nI0122 11:44:14.778722 1234 log.go:172] (0xc0008402c0) (0xc00030ec80) Stream added, broadcasting: 3\nI0122 11:44:14.779868 1234 log.go:172] (0xc0008402c0) Reply frame received for 3\nI0122 11:44:14.779894 1234 log.go:172] (0xc0008402c0) (0xc0003ae000) Create stream\nI0122 11:44:14.779907 1234 log.go:172] (0xc0008402c0) (0xc0003ae000) Stream added, broadcasting: 5\nI0122 11:44:14.784202 1234 log.go:172] (0xc0008402c0) Reply frame received for 5\nI0122 11:44:14.955303 1234 log.go:172] (0xc0008402c0) Data frame received for 3\nI0122 11:44:14.955446 1234 log.go:172] (0xc00030ec80) (3) Data frame handling\nI0122 11:44:14.955470 1234 log.go:172] (0xc00030ec80) (3) Data frame sent\nI0122 11:44:14.955521 1234 log.go:172] (0xc0008402c0) Data frame received for 5\nI0122 11:44:14.955537 1234 log.go:172] (0xc0003ae000) (5) Data frame handling\nI0122 11:44:14.955557 1234 log.go:172] (0xc0003ae000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0122 11:44:15.070387 1234 log.go:172] (0xc0008402c0) (0xc0003ae000) Stream removed, broadcasting: 5\nI0122 11:44:15.070590 1234 log.go:172] (0xc0008402c0) Data frame received for 1\nI0122 11:44:15.070612 1234 log.go:172] (0xc0007485a0) (1) Data frame handling\nI0122 11:44:15.070626 1234 log.go:172] 
(0xc0007485a0) (1) Data frame sent\nI0122 11:44:15.070850 1234 log.go:172] (0xc0008402c0) (0xc0007485a0) Stream removed, broadcasting: 1\nI0122 11:44:15.071024 1234 log.go:172] (0xc0008402c0) (0xc00030ec80) Stream removed, broadcasting: 3\nI0122 11:44:15.071069 1234 log.go:172] (0xc0008402c0) Go away received\nI0122 11:44:15.071754 1234 log.go:172] (0xc0008402c0) (0xc0007485a0) Stream removed, broadcasting: 1\nI0122 11:44:15.071770 1234 log.go:172] (0xc0008402c0) (0xc00030ec80) Stream removed, broadcasting: 3\nI0122 11:44:15.071781 1234 log.go:172] (0xc0008402c0) (0xc0003ae000) Stream removed, broadcasting: 5\n" Jan 22 11:44:15.077: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 11:44:15.077: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 11:44:15.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:44:15.454: INFO: stderr: "I0122 11:44:15.232652 1256 log.go:172] (0xc000720370) (0xc0007465a0) Create stream\nI0122 11:44:15.232823 1256 log.go:172] (0xc000720370) (0xc0007465a0) Stream added, broadcasting: 1\nI0122 11:44:15.235982 1256 log.go:172] (0xc000720370) Reply frame received for 1\nI0122 11:44:15.236011 1256 log.go:172] (0xc000720370) (0xc0005b8c80) Create stream\nI0122 11:44:15.236016 1256 log.go:172] (0xc000720370) (0xc0005b8c80) Stream added, broadcasting: 3\nI0122 11:44:15.236772 1256 log.go:172] (0xc000720370) Reply frame received for 3\nI0122 11:44:15.236794 1256 log.go:172] (0xc000720370) (0xc0006a4000) Create stream\nI0122 11:44:15.236800 1256 log.go:172] (0xc000720370) (0xc0006a4000) Stream added, broadcasting: 5\nI0122 11:44:15.237668 1256 log.go:172] (0xc000720370) Reply frame received for 5\nI0122 11:44:15.325541 1256 log.go:172] (0xc000720370) Data frame received for 5\nI0122 
11:44:15.325611 1256 log.go:172] (0xc0006a4000) (5) Data frame handling\nI0122 11:44:15.325622 1256 log.go:172] (0xc0006a4000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0122 11:44:15.325653 1256 log.go:172] (0xc000720370) Data frame received for 3\nI0122 11:44:15.325702 1256 log.go:172] (0xc0005b8c80) (3) Data frame handling\nI0122 11:44:15.325720 1256 log.go:172] (0xc0005b8c80) (3) Data frame sent\nI0122 11:44:15.447373 1256 log.go:172] (0xc000720370) Data frame received for 1\nI0122 11:44:15.447505 1256 log.go:172] (0xc000720370) (0xc0005b8c80) Stream removed, broadcasting: 3\nI0122 11:44:15.447581 1256 log.go:172] (0xc0007465a0) (1) Data frame handling\nI0122 11:44:15.447593 1256 log.go:172] (0xc0007465a0) (1) Data frame sent\nI0122 11:44:15.447598 1256 log.go:172] (0xc000720370) (0xc0007465a0) Stream removed, broadcasting: 1\nI0122 11:44:15.447802 1256 log.go:172] (0xc000720370) (0xc0006a4000) Stream removed, broadcasting: 5\nI0122 11:44:15.447836 1256 log.go:172] (0xc000720370) Go away received\nI0122 11:44:15.447976 1256 log.go:172] (0xc000720370) (0xc0007465a0) Stream removed, broadcasting: 1\nI0122 11:44:15.447986 1256 log.go:172] (0xc000720370) (0xc0005b8c80) Stream removed, broadcasting: 3\nI0122 11:44:15.447995 1256 log.go:172] (0xc000720370) (0xc0006a4000) Stream removed, broadcasting: 5\n" Jan 22 11:44:15.455: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 22 11:44:15.455: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 22 11:44:15.476: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 11:44:15.476: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false Jan 22 11:44:25.494: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 22 11:44:25.494: INFO: Waiting for pod ss-1 to 
enter Running - Ready=true, currently Running - Ready=true Jan 22 11:44:25.494: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 22 11:44:25.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 11:44:26.106: INFO: stderr: "I0122 11:44:25.800071 1278 log.go:172] (0xc00014c6e0) (0xc000728640) Create stream\nI0122 11:44:25.800263 1278 log.go:172] (0xc00014c6e0) (0xc000728640) Stream added, broadcasting: 1\nI0122 11:44:25.809030 1278 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0122 11:44:25.809314 1278 log.go:172] (0xc00014c6e0) (0xc00059ec80) Create stream\nI0122 11:44:25.809362 1278 log.go:172] (0xc00014c6e0) (0xc00059ec80) Stream added, broadcasting: 3\nI0122 11:44:25.811472 1278 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0122 11:44:25.811540 1278 log.go:172] (0xc00014c6e0) (0xc000504000) Create stream\nI0122 11:44:25.811552 1278 log.go:172] (0xc00014c6e0) (0xc000504000) Stream added, broadcasting: 5\nI0122 11:44:25.813975 1278 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0122 11:44:25.941812 1278 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0122 11:44:25.941932 1278 log.go:172] (0xc00059ec80) (3) Data frame handling\nI0122 11:44:25.941961 1278 log.go:172] (0xc00059ec80) (3) Data frame sent\nI0122 11:44:26.093321 1278 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0122 11:44:26.093515 1278 log.go:172] (0xc00014c6e0) (0xc000504000) Stream removed, broadcasting: 5\nI0122 11:44:26.093570 1278 log.go:172] (0xc000728640) (1) Data frame handling\nI0122 11:44:26.093588 1278 log.go:172] (0xc000728640) (1) Data frame sent\nI0122 11:44:26.093699 1278 log.go:172] (0xc00014c6e0) (0xc00059ec80) Stream removed, broadcasting: 3\nI0122 11:44:26.093764 1278 log.go:172] (0xc00014c6e0) 
(0xc000728640) Stream removed, broadcasting: 1\nI0122 11:44:26.093801 1278 log.go:172] (0xc00014c6e0) Go away received\nI0122 11:44:26.094285 1278 log.go:172] (0xc00014c6e0) (0xc000728640) Stream removed, broadcasting: 1\nI0122 11:44:26.094307 1278 log.go:172] (0xc00014c6e0) (0xc00059ec80) Stream removed, broadcasting: 3\nI0122 11:44:26.094324 1278 log.go:172] (0xc00014c6e0) (0xc000504000) Stream removed, broadcasting: 5\n" Jan 22 11:44:26.106: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 11:44:26.107: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 11:44:26.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 11:44:26.974: INFO: stderr: "I0122 11:44:26.317712 1301 log.go:172] (0xc00072e370) (0xc000798640) Create stream\nI0122 11:44:26.318005 1301 log.go:172] (0xc00072e370) (0xc000798640) Stream added, broadcasting: 1\nI0122 11:44:26.323192 1301 log.go:172] (0xc00072e370) Reply frame received for 1\nI0122 11:44:26.323230 1301 log.go:172] (0xc00072e370) (0xc0007986e0) Create stream\nI0122 11:44:26.323238 1301 log.go:172] (0xc00072e370) (0xc0007986e0) Stream added, broadcasting: 3\nI0122 11:44:26.324215 1301 log.go:172] (0xc00072e370) Reply frame received for 3\nI0122 11:44:26.324243 1301 log.go:172] (0xc00072e370) (0xc00067ad20) Create stream\nI0122 11:44:26.324252 1301 log.go:172] (0xc00072e370) (0xc00067ad20) Stream added, broadcasting: 5\nI0122 11:44:26.325047 1301 log.go:172] (0xc00072e370) Reply frame received for 5\nI0122 11:44:26.644137 1301 log.go:172] (0xc00072e370) Data frame received for 3\nI0122 11:44:26.644374 1301 log.go:172] (0xc0007986e0) (3) Data frame handling\nI0122 11:44:26.644423 1301 log.go:172] (0xc0007986e0) (3) Data frame sent\nI0122 11:44:26.964881 1301 log.go:172] 
(0xc00072e370) (0xc00067ad20) Stream removed, broadcasting: 5\nI0122 11:44:26.965095 1301 log.go:172] (0xc00072e370) Data frame received for 1\nI0122 11:44:26.965117 1301 log.go:172] (0xc000798640) (1) Data frame handling\nI0122 11:44:26.965138 1301 log.go:172] (0xc000798640) (1) Data frame sent\nI0122 11:44:26.965171 1301 log.go:172] (0xc00072e370) (0xc0007986e0) Stream removed, broadcasting: 3\nI0122 11:44:26.965210 1301 log.go:172] (0xc00072e370) (0xc000798640) Stream removed, broadcasting: 1\nI0122 11:44:26.965258 1301 log.go:172] (0xc00072e370) Go away received\nI0122 11:44:26.965845 1301 log.go:172] (0xc00072e370) (0xc000798640) Stream removed, broadcasting: 1\nI0122 11:44:26.965876 1301 log.go:172] (0xc00072e370) (0xc0007986e0) Stream removed, broadcasting: 3\nI0122 11:44:26.965895 1301 log.go:172] (0xc00072e370) (0xc00067ad20) Stream removed, broadcasting: 5\n" Jan 22 11:44:26.974: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 11:44:26.974: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 11:44:26.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 22 11:44:27.480: INFO: stderr: "I0122 11:44:27.154654 1323 log.go:172] (0xc00076e2c0) (0xc0005d25a0) Create stream\nI0122 11:44:27.154911 1323 log.go:172] (0xc00076e2c0) (0xc0005d25a0) Stream added, broadcasting: 1\nI0122 11:44:27.159408 1323 log.go:172] (0xc00076e2c0) Reply frame received for 1\nI0122 11:44:27.159434 1323 log.go:172] (0xc00076e2c0) (0xc0006e4000) Create stream\nI0122 11:44:27.159442 1323 log.go:172] (0xc00076e2c0) (0xc0006e4000) Stream added, broadcasting: 3\nI0122 11:44:27.160463 1323 log.go:172] (0xc00076e2c0) Reply frame received for 3\nI0122 11:44:27.160504 1323 log.go:172] (0xc00076e2c0) (0xc0006e4140) Create stream\nI0122 
11:44:27.160520 1323 log.go:172] (0xc00076e2c0) (0xc0006e4140) Stream added, broadcasting: 5\nI0122 11:44:27.161760 1323 log.go:172] (0xc00076e2c0) Reply frame received for 5\nI0122 11:44:27.297648 1323 log.go:172] (0xc00076e2c0) Data frame received for 3\nI0122 11:44:27.297738 1323 log.go:172] (0xc0006e4000) (3) Data frame handling\nI0122 11:44:27.297765 1323 log.go:172] (0xc0006e4000) (3) Data frame sent\nI0122 11:44:27.473031 1323 log.go:172] (0xc00076e2c0) Data frame received for 1\nI0122 11:44:27.473115 1323 log.go:172] (0xc00076e2c0) (0xc0006e4000) Stream removed, broadcasting: 3\nI0122 11:44:27.473152 1323 log.go:172] (0xc0005d25a0) (1) Data frame handling\nI0122 11:44:27.473166 1323 log.go:172] (0xc0005d25a0) (1) Data frame sent\nI0122 11:44:27.473207 1323 log.go:172] (0xc00076e2c0) (0xc0006e4140) Stream removed, broadcasting: 5\nI0122 11:44:27.473228 1323 log.go:172] (0xc00076e2c0) (0xc0005d25a0) Stream removed, broadcasting: 1\nI0122 11:44:27.473264 1323 log.go:172] (0xc00076e2c0) Go away received\nI0122 11:44:27.473911 1323 log.go:172] (0xc00076e2c0) (0xc0005d25a0) Stream removed, broadcasting: 1\nI0122 11:44:27.473933 1323 log.go:172] (0xc00076e2c0) (0xc0006e4000) Stream removed, broadcasting: 3\nI0122 11:44:27.473945 1323 log.go:172] (0xc00076e2c0) (0xc0006e4140) Stream removed, broadcasting: 5\n" Jan 22 11:44:27.480: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 22 11:44:27.480: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 22 11:44:27.480: INFO: Waiting for statefulset status.replicas updated to 0 Jan 22 11:44:27.513: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 22 11:44:27.513: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 22 11:44:27.513: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false 
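The `mv -v … || true` command the harness runs in each pod is deliberately idempotent: `mv` fails once the file has already been moved, but the `|| true` keeps the overall exit status 0, so the harness can re-run the same command blindly. A minimal local sketch of that behavior, using temporary directories as stand-ins for the container's `/usr/share/nginx/html` and `/tmp` paths:

```shell
# Simulate the pod's filesystem with temp directories (stand-ins for
# /usr/share/nginx/html and /tmp inside the nginx container).
html=$(mktemp -d) && tmp=$(mktemp -d)
echo hello > "$html/index.html"

# First run moves the file; mv -v prints the rename and exits 0.
mv -v "$html/index.html" "$tmp/" || true

# Second run fails (the file is already gone), but "|| true" keeps the
# overall exit status 0 -- which is why the harness can retry it safely.
mv -v "$html/index.html" "$tmp/" || true

rm -rf "$html" "$tmp"
```

Moving `index.html` away is what flips each pod's readiness probe to `Ready=false` in the status lines that follow; moving it back restores readiness.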
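Further down in this section, once pod ss-2 has been deleted, the harness logs a fixed-interval retry loop: each `kubectl exec` returns `rc: 1`, followed by `Waiting 10s to retry failed RunHostCmd`. A minimal sketch of that polling pattern, with the command runner and sleep injected so the loop is testable without a cluster (the names here are illustrative, not the e2e framework's real API):

```python
import time


def retry_host_cmd(run_cmd, attempts=5, interval=10.0, sleep=time.sleep):
    """Re-run `run_cmd` (which returns an exit code) until it returns 0.

    Mirrors the fixed 10s back-off seen in the log ("Waiting 10s to retry
    failed RunHostCmd"). Returns the number of attempts used, or raises
    once the attempt budget is exhausted.
    """
    for attempt in range(1, attempts + 1):
        if run_cmd() == 0:
            return attempt
        if attempt < attempts:
            sleep(interval)  # fixed-interval back-off between retries
    raise RuntimeError(f"command still failing after {attempts} attempts")


# Example: a command that fails twice ("container not found") then succeeds.
rcs = iter([1, 1, 0])
print(retry_host_cmd(lambda: next(rcs), sleep=lambda _: None))  # -> 3
```

In the log below the command never succeeds again (the pod is gone for good, so every attempt gets `pods "ss-2" not found`), which is the expected outcome while waiting for the scale-down to 0 to complete.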
Jan 22 11:44:27.534: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:27.534: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:27.534: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:27.534: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:27.535: INFO: Jan 22 11:44:27.535: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 11:44:29.750: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:29.750: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:29.750: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:29.750: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:29.750: INFO: Jan 22 11:44:29.750: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 11:44:30.793: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:30.793: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:30.793: INFO: ss-1 hunter-server-hu5at5svl7ps 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:30.793: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:30.794: INFO: Jan 22 11:44:30.794: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 11:44:32.983: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:32.984: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:32.984: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:32.984: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:32.984: INFO: Jan 22 11:44:32.984: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 11:44:34.073: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:34.073: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:34.074: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:34.074: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:03 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:34.074: INFO: Jan 22 11:44:34.074: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 11:44:35.082: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:35.082: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:35.082: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:35.082: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:35.082: INFO: Jan 22 11:44:35.082: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 11:44:36.168: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:36.168: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:43:41 +0000 UTC }] Jan 22 11:44:36.168: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:36.168: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:36.168: INFO: Jan 22 11:44:36.168: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 22 11:44:37.176: INFO: POD NODE PHASE GRACE CONDITIONS Jan 22 11:44:37.176: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-01-22 11:44:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 11:44:02 +0000 UTC }] Jan 22 11:44:37.176: INFO: Jan 22 11:44:37.176: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-5pbhn Jan 22 11:44:38.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:44:38.742: INFO: rc: 1 Jan 22 11:44:38.743: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0019fecf0 exit status 1 true [0xc000af61d8 0xc000af61f0 0xc000af6208] [0xc000af61d8 0xc000af61f0 0xc000af6208] [0xc000af61e8 0xc000af6200] [0x935700 0x935700] 0xc0018ade00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 22 11:44:48.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:44:48.918: INFO: rc: 1 Jan 22 11:44:48.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] 
Error from server (NotFound): pods "ss-2" not found [] 0xc0022ab080 exit status 1 true [0xc001806908 0xc001806920 0xc001806938] [0xc001806908 0xc001806920 0xc001806938] [0xc001806918 0xc001806930] [0x935700 0x935700] 0xc0019cb500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:44:58.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:44:59.067: INFO: rc: 1 Jan 22 11:44:59.067: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0019fee10 exit status 1 true [0xc000af6210 0xc000af6228 0xc000af6240] [0xc000af6210 0xc000af6228 0xc000af6240] [0xc000af6220 0xc000af6238] [0x935700 0x935700] 0xc0021101e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:45:09.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:45:09.234: INFO: rc: 1 Jan 22 11:45:09.234: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00224ab10 exit status 1 true [0xc001a9e638 0xc001a9e650 0xc001a9e668] [0xc001a9e638 0xc001a9e650 0xc001a9e668] [0xc001a9e648 0xc001a9e660] [0x935700 0x935700] 0xc0010a8540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: 
exit status 1 Jan 22 11:45:19.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:45:19.400: INFO: rc: 1 Jan 22 11:45:19.400: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0022ab1d0 exit status 1 true [0xc001806940 0xc001806958 0xc001806970] [0xc001806940 0xc001806958 0xc001806970] [0xc001806950 0xc001806968] [0x935700 0x935700] 0xc0019cb860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:45:29.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:45:29.560: INFO: rc: 1 Jan 22 11:45:29.561: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00224ac90 exit status 1 true [0xc001a9e670 0xc001a9e688 0xc001a9e6a0] [0xc001a9e670 0xc001a9e688 0xc001a9e6a0] [0xc001a9e680 0xc001a9e698] [0x935700 0x935700] 0xc0010a8840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:45:39.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:45:39.734: INFO: rc: 1 Jan 22 11:45:39.735: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00220c2d0 exit status 1 true [0xc00000ebe8 0xc00000ec80 0xc00000ed28] [0xc00000ebe8 0xc00000ec80 0xc00000ed28] [0xc00000ec50 0xc00000ecd8] [0x935700 0x935700] 0xc0018ac8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:45:49.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:45:50.054: INFO: rc: 1 Jan 22 11:45:50.055: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00220c450 exit status 1 true [0xc00000ed48 0xc00000ed68 0xc00000ede8] [0xc00000ed48 0xc00000ed68 0xc00000ede8] [0xc00000ed60 0xc00000ede0] [0x935700 0x935700] 0xc0018adec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:46:00.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:46:00.374: INFO: rc: 1 Jan 22 11:46:00.375: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b84390 exit status 1 true [0xc0000ee0f0 
0xc001a9e000 0xc001a9e018] [0xc0000ee0f0 0xc001a9e000 0xc001a9e018] [0xc0000ee238 0xc001a9e010] [0x935700 0x935700] 0xc000f823c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:46:10.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:46:10.512: INFO: rc: 1 Jan 22 11:46:10.512: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00220c5a0 exit status 1 true [0xc00000edf8 0xc00000eeb8 0xc00000ef28] [0xc00000edf8 0xc00000eeb8 0xc00000ef28] [0xc00000ee58 0xc00000ef00] [0x935700 0x935700] 0xc00176a900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:46:20.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:46:20.640: INFO: rc: 1 Jan 22 11:46:20.641: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00220c6f0 exit status 1 true [0xc00000ef98 0xc00000f040 0xc00000f0f8] [0xc00000ef98 0xc00000f040 0xc00000f0f8] [0xc00000f018 0xc00000f0c8] [0x935700 0x935700] 0xc00176aba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:46:30.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:46:30.875: INFO: rc: 1 Jan 22 11:46:30.876: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00220c810 exit status 1 true [0xc00000f138 0xc00000f188 0xc00000f1a0] [0xc00000f138 0xc00000f188 0xc00000f1a0] [0xc00000f180 0xc00000f198] [0x935700 0x935700] 0xc00176b140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:46:40.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:46:41.046: INFO: rc: 1 Jan 22 11:46:41.046: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e66120 exit status 1 true [0xc00041e130 0xc00041e2a8 0xc00041e308] [0xc00041e130 0xc00041e2a8 0xc00041e308] [0xc00041e200 0xc00041e2e0] [0x935700 0x935700] 0xc0014ba240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:46:51.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:46:51.168: INFO: rc: 1 Jan 22 11:46:51.168: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00220c990 exit status 1 true [0xc00000f1c0 0xc00000f268 0xc00000f300] [0xc00000f1c0 0xc00000f268 0xc00000f300] [0xc00000f220 0xc00000f2c8] [0x935700 0x935700] 0xc00176bce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:47:01.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:47:01.350: INFO: rc: 1 Jan 22 11:47:01.351: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b84510 exit status 1 true [0xc001a9e020 0xc001a9e038 0xc001a9e050] [0xc001a9e020 0xc001a9e038 0xc001a9e050] [0xc001a9e030 0xc001a9e048] [0x935700 0x935700] 0xc000f826c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:47:11.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:47:11.507: INFO: rc: 1 Jan 22 11:47:11.507: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b84630 exit status 1 true [0xc001a9e058 0xc001a9e070 0xc001a9e088] [0xc001a9e058 0xc001a9e070 0xc001a9e088] [0xc001a9e068 0xc001a9e080] 
[0x935700 0x935700] 0xc000f82960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:47:21.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:47:21.645: INFO: rc: 1 Jan 22 11:47:21.645: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013b80f0 exit status 1 true [0xc0013ac000 0xc0013ac018 0xc0013ac030] [0xc0013ac000 0xc0013ac018 0xc0013ac030] [0xc0013ac010 0xc0013ac028] [0x935700 0x935700] 0xc0015a8c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:47:31.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:47:31.807: INFO: rc: 1 Jan 22 11:47:31.808: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013b8240 exit status 1 true [0xc0013ac048 0xc0013ac060 0xc0013ac078] [0xc0013ac048 0xc0013ac060 0xc0013ac078] [0xc0013ac058 0xc0013ac070] [0x935700 0x935700] 0xc0015a9440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:47:41.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Jan 22 11:47:42.002: INFO: rc: 1 Jan 22 11:47:42.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b843c0 exit status 1 true [0xc0000ee1e8 0xc001a9e008 0xc001a9e020] [0xc0000ee1e8 0xc001a9e008 0xc001a9e020] [0xc001a9e000 0xc001a9e018] [0x935700 0x935700] 0xc0015a8c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:47:52.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:47:52.156: INFO: rc: 1 Jan 22 11:47:52.156: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b84570 exit status 1 true [0xc001a9e028 0xc001a9e040 0xc001a9e058] [0xc001a9e028 0xc001a9e040 0xc001a9e058] [0xc001a9e038 0xc001a9e050] [0x935700 0x935700] 0xc0015a9440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:48:02.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:48:02.321: INFO: rc: 1 Jan 22 11:48:02.321: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b846f0 exit status 1 true [0xc001a9e060 0xc001a9e078 0xc001a9e090] [0xc001a9e060 0xc001a9e078 0xc001a9e090] [0xc001a9e070 0xc001a9e088] [0x935700 0x935700] 0xc00176a960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:48:12.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:48:12.512: INFO: rc: 1 Jan 22 11:48:12.513: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e66150 exit status 1 true [0xc00041e130 0xc00041e2a8 0xc00041e308] [0xc00041e130 0xc00041e2a8 0xc00041e308] [0xc00041e200 0xc00041e2e0] [0x935700 0x935700] 0xc0018ac8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:48:22.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:48:22.853: INFO: rc: 1 Jan 22 11:48:22.853: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e66270 exit status 1 true [0xc00041e350 0xc00041e3f8 0xc00041e560] [0xc00041e350 0xc00041e3f8 0xc00041e560] [0xc00041e3d0 0xc00041e4a0] [0x935700 0x935700] 0xc0018adec0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:48:32.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:48:32.972: INFO: rc: 1 Jan 22 11:48:32.972: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013b8120 exit status 1 true [0xc0013ac000 0xc0013ac018 0xc0013ac030] [0xc0013ac000 0xc0013ac018 0xc0013ac030] [0xc0013ac010 0xc0013ac028] [0x935700 0x935700] 0xc000f823c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:48:42.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:48:43.167: INFO: rc: 1 Jan 22 11:48:43.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013b82d0 exit status 1 true [0xc0013ac048 0xc0013ac060 0xc0013ac078] [0xc0013ac048 0xc0013ac060 0xc0013ac078] [0xc0013ac058 0xc0013ac070] [0x935700 0x935700] 0xc000f826c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:48:53.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:48:53.345: INFO: rc: 1 Jan 22 11:48:53.346: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013b8420 exit status 1 true [0xc0013ac080 0xc0013ac098 0xc0013ac0b0] [0xc0013ac080 0xc0013ac098 0xc0013ac0b0] [0xc0013ac090 0xc0013ac0a8] [0x935700 0x935700] 0xc000f82960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:49:03.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:49:03.509: INFO: rc: 1 Jan 22 11:49:03.509: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b848a0 exit status 1 true [0xc001a9e098 0xc001a9e0b0 0xc001a9e0c8] [0xc001a9e098 0xc001a9e0b0 0xc001a9e0c8] [0xc001a9e0a8 0xc001a9e0c0] [0x935700 0x935700] 0xc00176ad80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:49:13.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:49:13.700: INFO: rc: 1 Jan 22 11:49:13.700: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e665a0 exit 
status 1 true [0xc00041e5c0 0xc00041e6d8 0xc00041e788] [0xc00041e5c0 0xc00041e6d8 0xc00041e788] [0xc00041e6a8 0xc00041e770] [0x935700 0x935700] 0xc0014ba240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:49:23.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:49:23.947: INFO: rc: 1 Jan 22 11:49:23.948: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b84a20 exit status 1 true [0xc001a9e0d0 0xc001a9e0e8 0xc001a9e100] [0xc001a9e0d0 0xc001a9e0e8 0xc001a9e100] [0xc001a9e0e0 0xc001a9e0f8] [0x935700 0x935700] 0xc00176b1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:49:33.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 22 11:49:34.081: INFO: rc: 1 Jan 22 11:49:34.081: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00220c300 exit status 1 true [0xc00000e010 0xc00000ec50 0xc00000ecd8] [0xc00000e010 0xc00000ec50 0xc00000ecd8] [0xc00000ec20 0xc00000ecb0] [0x935700 0x935700] 0xc002112240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 22 11:49:44.083: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5pbhn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 11:49:44.215: INFO: rc: 1
Jan 22 11:49:44.215: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Jan 22 11:49:44.216: INFO: Scaling statefulset ss to 0
Jan 22 11:49:44.234: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 22 11:49:44.239: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5pbhn
Jan 22 11:49:44.243: INFO: Scaling statefulset ss to 0
Jan 22 11:49:44.254: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 11:49:44.257: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:49:44.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-5pbhn" for this suite.
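The long stretch above is a fixed-interval retry around RunHostCmd: the framework runs `kubectl exec`, and on a non-zero exit code logs "Waiting 10s to retry failed RunHostCmd" and tries again until the command succeeds or the surrounding test timeout fires. A minimal sketch of that pattern, with the caveat that `retry_host_cmd`, its parameters, and the injected `run_cmd`/`sleep` callables are illustrative stand-ins, not the e2e framework's actual API:

```python
import time


def retry_host_cmd(run_cmd, interval=10, max_attempts=30, sleep=time.sleep):
    """Run `run_cmd` (returns a (rc, stdout, stderr) tuple) until rc == 0,
    waiting `interval` seconds between attempts -- the same shape as the
    RunHostCmd retry loop seen in the log above."""
    err = ""
    for attempt in range(1, max_attempts + 1):
        rc, out, err = run_cmd()
        if rc == 0:
            return out
        if attempt < max_attempts:
            sleep(interval)  # "Waiting 10s to retry failed RunHostCmd"
    raise RuntimeError(
        f"command still failing after {max_attempts} attempts: {err}")
```

In the log the retries never succeed because pod `ss-2` has already been deleted (`Error from server (NotFound)`), so the loop simply runs until the test moves on to scaling the StatefulSet to 0.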
Jan 22 11:49:52.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:49:52.514: INFO: namespace: e2e-tests-statefulset-5pbhn, resource: bindings, ignored listing per whitelist
Jan 22 11:49:52.607: INFO: namespace e2e-tests-statefulset-5pbhn deletion completed in 8.269530413s

• [SLOW TEST:371.093 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:49:52.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 22 11:49:52.763: INFO: Waiting up to 5m0s for pod "client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005" in namespace "e2e-tests-containers-m8mmj" to be "success or failure"
Jan 22 11:49:52.779: INFO: Pod "client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.606306ms
Jan 22 11:49:54.802: INFO: Pod "client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039131061s
Jan 22 11:49:56.824: INFO: Pod "client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061207287s
Jan 22 11:49:58.873: INFO: Pod "client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110219457s
Jan 22 11:50:00.947: INFO: Pod "client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184609312s
STEP: Saw pod success
Jan 22 11:50:00.948: INFO: Pod "client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:50:00.963: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005 container test-container:
STEP: delete the pod
Jan 22 11:50:01.133: INFO: Waiting for pod client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005 to disappear
Jan 22 11:50:01.168: INFO: Pod client-containers-4d357d2b-3d0d-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:50:01.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-m8mmj" for this suite.
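The "Waiting up to 5m0s for pod … to be 'success or failure'" sequence is a poll loop: the framework re-reads the pod's phase every couple of seconds, logging the elapsed time, until the pod reaches a terminal phase or the timeout expires. A sketch of that wait, where `wait_for_pod_condition`, `get_phase`, and the 300s/2s defaults are illustrative assumptions rather than the framework's real signature:

```python
import time


def wait_for_pod_condition(get_phase, timeout=300, poll=2,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase (via the caller-supplied `get_phase`) until it is
    Succeeded or Failed, mirroring the "success or failure" wait with its
    per-iteration Elapsed log lines."""
    start = clock()
    phase = None
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal phase: the condition is satisfied
        sleep(poll)
    raise TimeoutError(f"pod still {phase!r} after {timeout}s")
```

In the run above the pod stays `Pending` for four polls (~8s) and then reports `Succeeded`, which is why the test logs "Saw pod success" before fetching the container logs and deleting the pod.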
Jan 22 11:50:07.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:50:07.349: INFO: namespace: e2e-tests-containers-m8mmj, resource: bindings, ignored listing per whitelist Jan 22 11:50:07.528: INFO: namespace e2e-tests-containers-m8mmj deletion completed in 6.343181753s • [SLOW TEST:14.921 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:50:07.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-q8gnc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc 
A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-q8gnc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q8gnc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 186.242.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.242.186_udp@PTR;check="$$(dig +tcp +noall +answer +search 186.242.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.242.186_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-q8gnc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-q8gnc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q8gnc.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q8gnc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 186.242.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.242.186_udp@PTR;check="$$(dig +tcp +noall +answer +search 186.242.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.242.186_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 22 11:50:24.196: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.201: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.209: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q8gnc from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.215: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q8gnc from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.223: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q8gnc.svc from pod 
e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.230: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q8gnc.svc from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.235: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.240: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.244: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.249: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.255: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.260: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005: the server could not 
find the requested resource (get pods dns-test-5633d905-3d0d-11ea-ad91-0242ac110005) Jan 22 11:50:24.271: INFO: Lookups using e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q8gnc jessie_tcp@dns-test-service.e2e-tests-dns-q8gnc jessie_udp@dns-test-service.e2e-tests-dns-q8gnc.svc jessie_tcp@dns-test-service.e2e-tests-dns-q8gnc.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q8gnc.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-q8gnc.svc jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 22 11:50:29.545: INFO: DNS probes using e2e-tests-dns-q8gnc/dns-test-5633d905-3d0d-11ea-ad91-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:50:30.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-q8gnc" for this suite. 
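The wheezy/jessie probe scripts above build two derived DNS names with shell tools: a dashed pod A record (`hostname -i` piped through `awk` to turn `a.b.c.d` into `a-b-c-d.<namespace>.pod.cluster.local`) and a reverse-lookup name (`186.242.102.10.in-addr.arpa.` for the service IP `10.102.242.186`). The same transformations, sketched as small helpers (the function names are illustrative; only the name formats come from the log):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Dashed pod A-record name queried by the probes,
    e.g. 10.1.2.3 -> 10-1-2-3.<namespace>.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"


def ptr_name(ip):
    """Reverse-lookup (PTR) name for an IPv4 address: octets reversed,
    with the in-addr.arpa. suffix, e.g. 10.102.242.186 ->
    186.242.102.10.in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."
```

Each successful `dig` against one of these names writes an `OK` marker file under `/results/`, which is what the "looking for the results for each expected name from probers" step reads back; the early "Unable to read jessie_*" entries just mean the jessie prober had not produced its files yet, and the probes succeed five seconds later.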
Jan 22 11:50:38.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:50:38.411: INFO: namespace: e2e-tests-dns-q8gnc, resource: bindings, ignored listing per whitelist Jan 22 11:50:38.414: INFO: namespace e2e-tests-dns-q8gnc deletion completed in 8.231025576s • [SLOW TEST:30.886 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:50:38.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 22 11:50:38.873: INFO: Number of nodes with available pods: 0
Jan 22 11:50:38.873: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:39.895: INFO: Number of nodes with available pods: 0
Jan 22 11:50:39.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:40.913: INFO: Number of nodes with available pods: 0
Jan 22 11:50:40.913: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:41.926: INFO: Number of nodes with available pods: 0
Jan 22 11:50:41.926: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:42.927: INFO: Number of nodes with available pods: 0
Jan 22 11:50:42.928: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:44.346: INFO: Number of nodes with available pods: 0
Jan 22 11:50:44.346: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:45.728: INFO: Number of nodes with available pods: 0
Jan 22 11:50:45.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:46.013: INFO: Number of nodes with available pods: 0
Jan 22 11:50:46.013: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:46.894: INFO: Number of nodes with available pods: 0
Jan 22 11:50:46.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:47.900: INFO: Number of nodes with available pods: 0
Jan 22 11:50:47.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 11:50:48.899: INFO: Number of nodes with available pods: 1
Jan 22 11:50:48.899: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 22 11:50:49.057: INFO: Number of nodes with available pods: 1
Jan 22 11:50:49.057: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vfcgx, will wait for the garbage collector to delete the pods
Jan 22 11:50:50.178: INFO: Deleting DaemonSet.extensions daemon-set took: 15.15718ms
Jan 22 11:50:50.979: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.991392ms
Jan 22 11:50:56.787: INFO: Number of nodes with available pods: 0
Jan 22 11:50:56.787: INFO: Number of running nodes: 0, number of available pods: 0
Jan 22 11:50:56.792: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vfcgx/daemonsets","resourceVersion":"19071940"},"items":null}
Jan 22 11:50:56.796: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vfcgx/pods","resourceVersion":"19071940"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:50:56.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vfcgx" for this suite.
Jan 22 11:51:04.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:51:04.876: INFO: namespace: e2e-tests-daemonsets-vfcgx, resource: bindings, ignored listing per whitelist Jan 22 11:51:05.049: INFO: namespace e2e-tests-daemonsets-vfcgx deletion completed in 8.232232827s • [SLOW TEST:26.634 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:51:05.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 11:51:05.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-2v4cw' Jan 22 11:51:07.013: INFO: stderr: "kubectl run 
--generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 22 11:51:07.013: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 22 11:51:09.094: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4bhql] Jan 22 11:51:09.095: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4bhql" in namespace "e2e-tests-kubectl-2v4cw" to be "running and ready" Jan 22 11:51:09.098: INFO: Pod "e2e-test-nginx-rc-4bhql": Phase="Pending", Reason="", readiness=false. Elapsed: 3.89ms Jan 22 11:51:11.107: INFO: Pod "e2e-test-nginx-rc-4bhql": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012434481s Jan 22 11:51:13.119: INFO: Pod "e2e-test-nginx-rc-4bhql": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024139084s Jan 22 11:51:15.135: INFO: Pod "e2e-test-nginx-rc-4bhql": Phase="Running", Reason="", readiness=true. Elapsed: 6.040082667s Jan 22 11:51:15.135: INFO: Pod "e2e-test-nginx-rc-4bhql" satisfied condition "running and ready" Jan 22 11:51:15.135: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-4bhql] Jan 22 11:51:15.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2v4cw' Jan 22 11:51:15.360: INFO: stderr: "" Jan 22 11:51:15.361: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jan 22 11:51:15.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2v4cw' Jan 22 11:51:15.539: INFO: stderr: "" Jan 22 11:51:15.539: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:51:15.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2v4cw" for this suite. Jan 22 11:51:39.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:51:39.686: INFO: namespace: e2e-tests-kubectl-2v4cw, resource: bindings, ignored listing per whitelist Jan 22 11:51:39.870: INFO: namespace e2e-tests-kubectl-2v4cw deletion completed in 24.321521603s • [SLOW TEST:34.820 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:51:39.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:52:32.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-container-runtime-ktxwt" for this suite. Jan 22 11:52:39.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:52:39.159: INFO: namespace: e2e-tests-container-runtime-ktxwt, resource: bindings, ignored listing per whitelist Jan 22 11:52:39.182: INFO: namespace e2e-tests-container-runtime-ktxwt deletion completed in 6.20672764s • [SLOW TEST:59.312 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:52:39.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:52:49.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-4794s" for this suite. Jan 22 11:53:31.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:53:31.877: INFO: namespace: e2e-tests-kubelet-test-4794s, resource: bindings, ignored listing per whitelist Jan 22 11:53:31.996: INFO: namespace e2e-tests-kubelet-test-4794s deletion completed in 42.222670638s • [SLOW TEST:52.813 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:53:31.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 22 11:53:32.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:32.561: INFO: stderr: "" Jan 22 11:53:32.561: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 22 11:53:32.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:32.834: INFO: stderr: "" Jan 22 11:53:32.834: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jan 22 11:53:37.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:37.956: INFO: stderr: "" Jan 22 11:53:37.956: INFO: stdout: "update-demo-nautilus-9wlzd update-demo-nautilus-skc9x " Jan 22 11:53:37.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:38.057: INFO: stderr: "" Jan 22 11:53:38.057: INFO: stdout: "" Jan 22 11:53:38.057: INFO: update-demo-nautilus-9wlzd is created but not running Jan 22 11:53:43.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:43.242: INFO: stderr: "" Jan 22 11:53:43.242: INFO: stdout: "update-demo-nautilus-9wlzd update-demo-nautilus-skc9x " Jan 22 11:53:43.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:43.369: INFO: stderr: "" Jan 22 11:53:43.369: INFO: stdout: "true" Jan 22 11:53:43.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:43.486: INFO: stderr: "" Jan 22 11:53:43.486: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 11:53:43.486: INFO: validating pod update-demo-nautilus-9wlzd Jan 22 11:53:43.506: INFO: got data: { "image": "nautilus.jpg" } Jan 22 11:53:43.506: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 11:53:43.506: INFO: update-demo-nautilus-9wlzd is verified up and running Jan 22 11:53:43.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-skc9x -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:43.647: INFO: stderr: "" Jan 22 11:53:43.647: INFO: stdout: "true" Jan 22 11:53:43.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-skc9x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:43.835: INFO: stderr: "" Jan 22 11:53:43.835: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 11:53:43.836: INFO: validating pod update-demo-nautilus-skc9x Jan 22 11:53:43.867: INFO: got data: { "image": "nautilus.jpg" } Jan 22 11:53:43.868: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 11:53:43.868: INFO: update-demo-nautilus-skc9x is verified up and running STEP: scaling down the replication controller Jan 22 11:53:43.873: INFO: scanned /root for discovery docs: Jan 22 11:53:43.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:45.213: INFO: stderr: "" Jan 22 11:53:45.214: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 22 11:53:45.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:45.333: INFO: stderr: "" Jan 22 11:53:45.334: INFO: stdout: "update-demo-nautilus-9wlzd update-demo-nautilus-skc9x " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 22 11:53:50.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:50.501: INFO: stderr: "" Jan 22 11:53:50.501: INFO: stdout: "update-demo-nautilus-9wlzd update-demo-nautilus-skc9x " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 22 11:53:55.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:55.812: INFO: stderr: "" Jan 22 11:53:55.812: INFO: stdout: "update-demo-nautilus-9wlzd " Jan 22 11:53:55.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:55.941: INFO: stderr: "" Jan 22 11:53:55.941: INFO: stdout: "true" Jan 22 11:53:55.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:56.033: INFO: stderr: "" Jan 22 11:53:56.033: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 11:53:56.033: INFO: validating pod update-demo-nautilus-9wlzd Jan 22 11:53:56.044: INFO: got data: { "image": "nautilus.jpg" } Jan 22 11:53:56.044: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 11:53:56.044: INFO: update-demo-nautilus-9wlzd is verified up and running STEP: scaling up the replication controller Jan 22 11:53:56.047: INFO: scanned /root for discovery docs: Jan 22 11:53:56.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:57.385: INFO: stderr: "" Jan 22 11:53:57.385: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 22 11:53:57.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:57.633: INFO: stderr: "" Jan 22 11:53:57.633: INFO: stdout: "update-demo-nautilus-9wlzd update-demo-nautilus-tkhtf " Jan 22 11:53:57.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:57.800: INFO: stderr: "" Jan 22 11:53:57.800: INFO: stdout: "true" Jan 22 11:53:57.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:57.925: INFO: stderr: "" Jan 22 11:53:57.925: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 11:53:57.925: INFO: validating pod update-demo-nautilus-9wlzd Jan 22 11:53:57.944: INFO: got data: { "image": "nautilus.jpg" } Jan 22 11:53:57.945: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 11:53:57.945: INFO: update-demo-nautilus-9wlzd is verified up and running Jan 22 11:53:57.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkhtf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:53:58.277: INFO: stderr: "" Jan 22 11:53:58.277: INFO: stdout: "" Jan 22 11:53:58.277: INFO: update-demo-nautilus-tkhtf is created but not running Jan 22 11:54:03.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:03.580: INFO: stderr: "" Jan 22 11:54:03.580: INFO: stdout: "update-demo-nautilus-9wlzd update-demo-nautilus-tkhtf " Jan 22 11:54:03.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:03.759: INFO: stderr: "" Jan 22 11:54:03.759: INFO: stdout: "true" Jan 22 11:54:03.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:03.930: INFO: stderr: "" Jan 22 11:54:03.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 11:54:03.930: INFO: validating pod update-demo-nautilus-9wlzd Jan 22 11:54:03.940: INFO: got data: { "image": "nautilus.jpg" } Jan 22 11:54:03.941: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 11:54:03.941: INFO: update-demo-nautilus-9wlzd is verified up and running Jan 22 11:54:03.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkhtf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:04.086: INFO: stderr: "" Jan 22 11:54:04.086: INFO: stdout: "" Jan 22 11:54:04.086: INFO: update-demo-nautilus-tkhtf is created but not running Jan 22 11:54:09.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:09.257: INFO: stderr: "" Jan 22 11:54:09.257: INFO: stdout: "update-demo-nautilus-9wlzd update-demo-nautilus-tkhtf " Jan 22 11:54:09.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:09.414: INFO: stderr: "" Jan 22 11:54:09.414: INFO: stdout: "true" Jan 22 11:54:09.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wlzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:09.609: INFO: stderr: "" Jan 22 11:54:09.609: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 11:54:09.609: INFO: validating pod update-demo-nautilus-9wlzd Jan 22 11:54:09.617: INFO: got data: { "image": "nautilus.jpg" } Jan 22 11:54:09.617: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 11:54:09.617: INFO: update-demo-nautilus-9wlzd is verified up and running Jan 22 11:54:09.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkhtf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:09.729: INFO: stderr: "" Jan 22 11:54:09.729: INFO: stdout: "true" Jan 22 11:54:09.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkhtf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:09.869: INFO: stderr: "" Jan 22 11:54:09.869: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 22 11:54:09.869: INFO: validating pod update-demo-nautilus-tkhtf Jan 22 11:54:09.884: INFO: got data: { "image": "nautilus.jpg" } Jan 22 11:54:09.884: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 22 11:54:09.884: INFO: update-demo-nautilus-tkhtf is verified up and running STEP: using delete to clean up resources Jan 22 11:54:09.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:10.058: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 22 11:54:10.058: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 22 11:54:10.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-lh29f' Jan 22 11:54:10.242: INFO: stderr: "No resources found.\n" Jan 22 11:54:10.242: INFO: stdout: "" Jan 22 11:54:10.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-lh29f -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 22 11:54:10.494: INFO: stderr: "" Jan 22 11:54:10.495: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:54:10.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lh29f" for this suite. 
Jan 22 11:54:34.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:54:34.697: INFO: namespace: e2e-tests-kubectl-lh29f, resource: bindings, ignored listing per whitelist Jan 22 11:54:34.730: INFO: namespace e2e-tests-kubectl-lh29f deletion completed in 24.188965166s • [SLOW TEST:62.734 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:54:34.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 22 11:54:34.939: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-92r6h,SelfLink:/api/v1/namespaces/e2e-tests-watch-92r6h/configmaps/e2e-watch-test-label-changed,UID:f5641092-3d0d-11ea-a994-fa163e34d433,ResourceVersion:19072427,Generation:0,CreationTimestamp:2020-01-22 11:54:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 22 11:54:34.939: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-92r6h,SelfLink:/api/v1/namespaces/e2e-tests-watch-92r6h/configmaps/e2e-watch-test-label-changed,UID:f5641092-3d0d-11ea-a994-fa163e34d433,ResourceVersion:19072428,Generation:0,CreationTimestamp:2020-01-22 11:54:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 22 11:54:34.939: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-92r6h,SelfLink:/api/v1/namespaces/e2e-tests-watch-92r6h/configmaps/e2e-watch-test-label-changed,UID:f5641092-3d0d-11ea-a994-fa163e34d433,ResourceVersion:19072429,Generation:0,CreationTimestamp:2020-01-22 11:54:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 22 11:54:45.232: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-92r6h,SelfLink:/api/v1/namespaces/e2e-tests-watch-92r6h/configmaps/e2e-watch-test-label-changed,UID:f5641092-3d0d-11ea-a994-fa163e34d433,ResourceVersion:19072443,Generation:0,CreationTimestamp:2020-01-22 11:54:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 22 11:54:45.232: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-92r6h,SelfLink:/api/v1/namespaces/e2e-tests-watch-92r6h/configmaps/e2e-watch-test-label-changed,UID:f5641092-3d0d-11ea-a994-fa163e34d433,ResourceVersion:19072444,Generation:0,CreationTimestamp:2020-01-22 11:54:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 22 11:54:45.232: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-92r6h,SelfLink:/api/v1/namespaces/e2e-tests-watch-92r6h/configmaps/e2e-watch-test-label-changed,UID:f5641092-3d0d-11ea-a994-fa163e34d433,ResourceVersion:19072445,Generation:0,CreationTimestamp:2020-01-22 11:54:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:54:45.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-92r6h" for this suite. Jan 22 11:54:51.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:54:51.913: INFO: namespace: e2e-tests-watch-92r6h, resource: bindings, ignored listing per whitelist Jan 22 11:54:51.954: INFO: namespace e2e-tests-watch-92r6h deletion completed in 6.702457225s • [SLOW TEST:17.223 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:54:51.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 22 11:54:52.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-2wtdh" to be "success or failure" Jan 22 11:54:52.193: INFO: Pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 89.580072ms Jan 22 11:54:54.209: INFO: Pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105600131s Jan 22 11:54:56.232: INFO: Pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128963924s Jan 22 11:54:58.378: INFO: Pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274793117s Jan 22 11:55:00.408: INFO: Pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.304313947s Jan 22 11:55:02.428: INFO: Pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.324794713s STEP: Saw pod success Jan 22 11:55:02.428: INFO: Pod "downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:55:02.439: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005 container client-container: STEP: delete the pod Jan 22 11:55:02.692: INFO: Waiting for pod downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005 to disappear Jan 22 11:55:02.707: INFO: Pod downwardapi-volume-ffa0ab48-3d0d-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:55:02.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2wtdh" for this suite. Jan 22 11:55:08.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:55:08.957: INFO: namespace: e2e-tests-projected-2wtdh, resource: bindings, ignored listing per whitelist Jan 22 11:55:09.053: INFO: namespace e2e-tests-projected-2wtdh deletion completed in 6.33456761s • [SLOW TEST:17.100 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:55:09.054: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 22 11:55:09.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-gtb2p' Jan 22 11:55:09.287: INFO: stderr: "" Jan 22 11:55:09.287: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 22 11:55:19.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-gtb2p -o json' Jan 22 11:55:19.509: INFO: stderr: "" Jan 22 11:55:19.510: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-22T11:55:09Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-gtb2p\",\n \"resourceVersion\": \"19072523\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-gtb2p/pods/e2e-test-nginx-pod\",\n \"uid\": \"09deee5a-3d0e-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n 
\"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dlz98\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dlz98\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dlz98\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T11:55:09Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T11:55:16Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T11:55:16Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-22T11:55:09Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://e9e73717930ee79781bd5326a2f056f7f48f10589d5e4ca2f0a0982b8e969f62\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": 
\"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-22T11:55:16Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-22T11:55:09Z\"\n }\n}\n" STEP: replace the image in the pod Jan 22 11:55:19.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-gtb2p' Jan 22 11:55:19.935: INFO: stderr: "" Jan 22 11:55:19.935: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jan 22 11:55:19.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-gtb2p' Jan 22 11:55:27.571: INFO: stderr: "" Jan 22 11:55:27.572: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:55:27.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gtb2p" for this suite. 
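The `kubectl replace -f -` step above reads a complete pod manifest from stdin; unlike `patch`, `replace` swaps the whole object, so the manifest must restate everything, not just the changed image. A minimal sketch of what that replacement input might look like, with names and images taken from the log (the busybox `command` is an assumption added so the container has something long-running to execute):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-gtb2p
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # replaces docker.io/library/nginx:1.14-alpine
    command: ["sleep", "3600"]              # assumption: busybox exits immediately without a command
```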
Jan 22 11:55:33.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:55:34.758: INFO: namespace: e2e-tests-kubectl-gtb2p, resource: bindings, ignored listing per whitelist Jan 22 11:55:34.895: INFO: namespace e2e-tests-kubectl-gtb2p deletion completed in 7.301749055s • [SLOW TEST:25.842 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:55:34.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 22 11:55:35.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lvfw2' Jan 22 11:55:35.403: INFO: stderr: "" Jan 22 11:55:35.403: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
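The `kubectl create -f -` above reads a ReplicationController from stdin; a minimal sketch of such a manifest, matching the `redis-master` name and the `app: redis` selector that the log's pod-wait loop queries (the image name and port are assumptions — the e2e suite ships its own redis test image):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis   # assumption: the exact e2e test image is not shown in the log
        ports:
        - containerPort: 6379
```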
Jan 22 11:55:36.417: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:36.417: INFO: Found 0 / 1
Jan 22 11:55:37.420: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:37.421: INFO: Found 0 / 1
Jan 22 11:55:38.451: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:38.451: INFO: Found 0 / 1
Jan 22 11:55:39.439: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:39.439: INFO: Found 0 / 1
Jan 22 11:55:40.423: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:40.423: INFO: Found 0 / 1
Jan 22 11:55:41.929: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:41.929: INFO: Found 0 / 1
Jan 22 11:55:42.701: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:42.701: INFO: Found 0 / 1
Jan 22 11:55:43.413: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:43.413: INFO: Found 0 / 1
Jan 22 11:55:44.446: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:44.446: INFO: Found 0 / 1
Jan 22 11:55:45.415: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:45.415: INFO: Found 1 / 1
Jan 22 11:55:45.415: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 22 11:55:45.422: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:45.422: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 22 11:55:45.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5jww2 --namespace=e2e-tests-kubectl-lvfw2 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 22 11:55:45.647: INFO: stderr: ""
Jan 22 11:55:45.647: INFO: stdout: "pod/redis-master-5jww2 patched\n"
STEP: checking annotations
Jan 22 11:55:45.657: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 11:55:45.657: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
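The `-p` argument passed to `kubectl patch` above is, on its own, just this JSON document. `kubectl patch` defaults to a strategic merge patch for pods, and maps are merged key by key, so this adds the `x: y` annotation without disturbing any annotations the pod already has:

```json
{
  "metadata": {
    "annotations": {
      "x": "y"
    }
  }
}
```

This is why the test can patch a pod owned by a replication controller in place: a metadata-only merge does not conflict with the spec the controller manages.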
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:55:45.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lvfw2" for this suite. Jan 22 11:56:09.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:56:09.866: INFO: namespace: e2e-tests-kubectl-lvfw2, resource: bindings, ignored listing per whitelist Jan 22 11:56:09.917: INFO: namespace e2e-tests-kubectl-lvfw2 deletion completed in 24.255456149s • [SLOW TEST:35.022 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:56:09.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-2e2fec1d-3d0e-11ea-ad91-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 22 11:56:10.228: INFO: Waiting 
up to 5m0s for pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-7w9lj" to be "success or failure" Jan 22 11:56:10.295: INFO: Pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 66.721614ms Jan 22 11:56:12.309: INFO: Pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080829422s Jan 22 11:56:14.322: INFO: Pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094528646s Jan 22 11:56:16.330: INFO: Pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102222116s Jan 22 11:56:18.359: INFO: Pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130745577s Jan 22 11:56:20.416: INFO: Pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187871633s STEP: Saw pod success Jan 22 11:56:20.416: INFO: Pod "pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:56:20.429: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 22 11:56:20.958: INFO: Waiting for pod pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005 to disappear Jan 22 11:56:20.971: INFO: Pod pod-configmaps-2e32019f-3d0e-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:56:20.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7w9lj" for this suite. 
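The "consumable as non-root" test above creates a pod whose container reads a mounted ConfigMap while running under a non-root UID. A hedged sketch of such a pod, using hypothetical names (`pod-configmaps-example`, the UID, the busybox image, and the mount path are all assumptions — the log only shows the generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # hypothetical; the suite generates UID-suffixed names
spec:
  securityContext:
    runAsUser: 1000                   # assumption: any non-zero UID exercises the non-root path
  containers:
  - name: configmap-volume-test
    image: busybox                    # assumption: the suite uses its own mounttest image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example   # hypothetical ConfigMap name
  restartPolicy: Never
```

With `restartPolicy: Never`, the pod reaches `Succeeded` once the container exits 0 — the "success or failure" condition the log is polling for.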
Jan 22 11:56:27.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:56:27.112: INFO: namespace: e2e-tests-configmap-7w9lj, resource: bindings, ignored listing per whitelist Jan 22 11:56:27.207: INFO: namespace e2e-tests-configmap-7w9lj deletion completed in 6.216615759s • [SLOW TEST:17.290 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:56:27.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jan 22 11:56:27.374: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:56:27.493: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zktfc" for this suite. Jan 22 11:56:33.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:56:33.719: INFO: namespace: e2e-tests-kubectl-zktfc, resource: bindings, ignored listing per whitelist Jan 22 11:56:33.817: INFO: namespace e2e-tests-kubectl-zktfc deletion completed in 6.261485497s • [SLOW TEST:6.610 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:56:33.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 22 11:56:33.997: INFO: Waiting up to 5m0s for pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-kx4rn" to be "success or failure" Jan 22 11:56:34.086: INFO: Pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", 
readiness=false. Elapsed: 89.032347ms Jan 22 11:56:36.121: INFO: Pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124492714s Jan 22 11:56:38.133: INFO: Pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136611981s Jan 22 11:56:40.612: INFO: Pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.615063836s Jan 22 11:56:42.632: INFO: Pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635421716s Jan 22 11:56:44.675: INFO: Pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.678512003s STEP: Saw pod success Jan 22 11:56:44.676: INFO: Pod "pod-3c5d1742-3d0e-11ea-ad91-0242ac110005" satisfied condition "success or failure" Jan 22 11:56:44.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3c5d1742-3d0e-11ea-ad91-0242ac110005 container test-container: STEP: delete the pod Jan 22 11:56:45.659: INFO: Waiting for pod pod-3c5d1742-3d0e-11ea-ad91-0242ac110005 to disappear Jan 22 11:56:45.998: INFO: Pod pod-3c5d1742-3d0e-11ea-ad91-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:56:45.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kx4rn" for this suite. 
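The `(root,0644,tmpfs)` case above writes a file into a RAM-backed emptyDir and checks its mode and content. A hedged sketch of the shape of that pod (names, image, and the exact shell commands are assumptions — only the volume semantics come from the test title):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example     # hypothetical; the suite generates UID-suffixed names
spec:
  containers:
  - name: test-container
    image: busybox               # assumption: the suite uses its own mounttest image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # the "tmpfs" variant: backs the volume with RAM instead of node disk
  restartPolicy: Never
```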
Jan 22 11:56:52.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:56:52.452: INFO: namespace: e2e-tests-emptydir-kx4rn, resource: bindings, ignored listing per whitelist Jan 22 11:56:52.455: INFO: namespace e2e-tests-emptydir-kx4rn deletion completed in 6.392666683s • [SLOW TEST:18.637 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:56:52.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 22 11:56:52.647: INFO: Creating ReplicaSet my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005 Jan 22 11:56:52.767: INFO: Pod name my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005: Found 0 pods out of 1 Jan 22 11:56:58.052: INFO: Pod name my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005: Found 1 pods out of 1 Jan 22 11:56:58.052: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005" is running Jan 22 11:57:02.083: INFO: Pod 
"my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005-ctkdx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 11:56:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 11:56:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 11:56:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 11:56:52 +0000 UTC Reason: Message:}]) Jan 22 11:57:02.083: INFO: Trying to dial the pod Jan 22 11:57:07.126: INFO: Controller my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005: Got expected result from replica 1 [my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005-ctkdx]: "my-hostname-basic-477da37d-3d0e-11ea-ad91-0242ac110005-ctkdx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 22 11:57:07.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-pvxsm" for this suite. 
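The ReplicaSet test above creates one replica of a pod that serves its own hostname over HTTP, then dials it and checks the response matches the pod name. A sketch of such a ReplicaSet (the log shows a generated UID-suffixed name; the image and port here are assumptions based on the "serve a basic image" description):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example    # hypothetical; the log uses a generated UID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: k8s.gcr.io/serve_hostname   # assumption: exact e2e image name/tag not shown in the log
        ports:
        - containerPort: 9376
```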
Jan 22 11:57:13.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 22 11:57:13.284: INFO: namespace: e2e-tests-replicaset-pvxsm, resource: bindings, ignored listing per whitelist Jan 22 11:57:13.312: INFO: namespace e2e-tests-replicaset-pvxsm deletion completed in 6.1750911s • [SLOW TEST:20.856 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 22 11:57:13.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 22 11:57:13.494: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 15.823842ms)
Jan 22 11:57:13.501: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.972652ms)
Jan 22 11:57:13.509: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.342732ms)
Jan 22 11:57:13.514: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.275958ms)
Jan 22 11:57:13.520: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.048972ms)
Jan 22 11:57:13.525: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.894697ms)
Jan 22 11:57:13.563: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.262285ms)
Jan 22 11:57:13.569: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.712253ms)
Jan 22 11:57:13.574: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.832796ms)
Jan 22 11:57:13.579: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.052943ms)
Jan 22 11:57:13.586: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.787762ms)
Jan 22 11:57:13.593: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.626838ms)
Jan 22 11:57:13.600: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.631371ms)
Jan 22 11:57:13.613: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.666507ms)
Jan 22 11:57:13.622: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.589819ms)
Jan 22 11:57:13.631: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.687001ms)
Jan 22 11:57:13.637: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.298124ms)
Jan 22 11:57:13.644: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.013702ms)
Jan 22 11:57:13.649: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.212788ms)
Jan 22 11:57:13.655: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.394815ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:57:13.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-mrqb5" for this suite.
Jan 22 11:57:19.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:57:19.732: INFO: namespace: e2e-tests-proxy-mrqb5, resource: bindings, ignored listing per whitelist
Jan 22 11:57:19.907: INFO: namespace e2e-tests-proxy-mrqb5 deletion completed in 6.246843384s

• [SLOW TEST:6.595 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:57:19.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-tdkp
STEP: Creating a pod to test atomic-volume-subpath
Jan 22 11:57:20.109: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tdkp" in namespace "e2e-tests-subpath-kh7mz" to be "success or failure"
Jan 22 11:57:20.125: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 15.753377ms
Jan 22 11:57:22.144: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034681255s
Jan 22 11:57:24.447: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337673678s
Jan 22 11:57:26.476: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366002642s
Jan 22 11:57:28.503: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.393283653s
Jan 22 11:57:30.527: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.417066726s
Jan 22 11:57:32.980: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.87050777s
Jan 22 11:57:35.052: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.942688755s
Jan 22 11:57:37.092: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 16.982879452s
Jan 22 11:57:39.111: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 19.00133222s
Jan 22 11:57:41.135: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 21.025773121s
Jan 22 11:57:43.159: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 23.049761255s
Jan 22 11:57:45.179: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 25.069383536s
Jan 22 11:57:47.195: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 27.085570591s
Jan 22 11:57:49.212: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 29.102441802s
Jan 22 11:57:51.229: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Running", Reason="", readiness=false. Elapsed: 31.119265944s
Jan 22 11:57:53.244: INFO: Pod "pod-subpath-test-configmap-tdkp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.134520059s
STEP: Saw pod success
Jan 22 11:57:53.244: INFO: Pod "pod-subpath-test-configmap-tdkp" satisfied condition "success or failure"
Jan 22 11:57:53.251: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-tdkp container test-container-subpath-configmap-tdkp: 
STEP: delete the pod
Jan 22 11:57:53.571: INFO: Waiting for pod pod-subpath-test-configmap-tdkp to disappear
Jan 22 11:57:53.593: INFO: Pod pod-subpath-test-configmap-tdkp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-tdkp
Jan 22 11:57:53.594: INFO: Deleting pod "pod-subpath-test-configmap-tdkp" in namespace "e2e-tests-subpath-kh7mz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:57:53.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-kh7mz" for this suite.
Jan 22 11:58:01.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:58:01.772: INFO: namespace: e2e-tests-subpath-kh7mz, resource: bindings, ignored listing per whitelist
Jan 22 11:58:01.990: INFO: namespace e2e-tests-subpath-kh7mz deletion completed in 8.367767869s

• [SLOW TEST:42.082 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:58:01.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-zq7g
STEP: Creating a pod to test atomic-volume-subpath
Jan 22 11:58:02.320: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zq7g" in namespace "e2e-tests-subpath-9gxbx" to be "success or failure"
Jan 22 11:58:02.345: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Pending", Reason="", readiness=false. Elapsed: 24.813445ms
Jan 22 11:58:04.357: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03683008s
Jan 22 11:58:06.399: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07901194s
Jan 22 11:58:08.418: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097354348s
Jan 22 11:58:10.642: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322122832s
Jan 22 11:58:12.677: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.356669943s
Jan 22 11:58:14.689: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.369195579s
Jan 22 11:58:16.705: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 14.384743525s
Jan 22 11:58:18.732: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 16.411695838s
Jan 22 11:58:20.746: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 18.426299299s
Jan 22 11:58:22.762: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 20.442248156s
Jan 22 11:58:24.777: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 22.457312612s
Jan 22 11:58:26.788: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 24.468282728s
Jan 22 11:58:28.818: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 26.497529134s
Jan 22 11:58:30.840: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 28.520202421s
Jan 22 11:58:32.861: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 30.540696752s
Jan 22 11:58:34.879: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Running", Reason="", readiness=false. Elapsed: 32.558989563s
Jan 22 11:58:36.906: INFO: Pod "pod-subpath-test-secret-zq7g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.585501553s
STEP: Saw pod success
Jan 22 11:58:36.906: INFO: Pod "pod-subpath-test-secret-zq7g" satisfied condition "success or failure"
Jan 22 11:58:36.918: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-zq7g container test-container-subpath-secret-zq7g: 
STEP: delete the pod
Jan 22 11:58:37.192: INFO: Waiting for pod pod-subpath-test-secret-zq7g to disappear
Jan 22 11:58:37.209: INFO: Pod pod-subpath-test-secret-zq7g no longer exists
STEP: Deleting pod pod-subpath-test-secret-zq7g
Jan 22 11:58:37.209: INFO: Deleting pod "pod-subpath-test-secret-zq7g" in namespace "e2e-tests-subpath-9gxbx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:58:37.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9gxbx" for this suite.
Jan 22 11:58:45.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:58:45.469: INFO: namespace: e2e-tests-subpath-9gxbx, resource: bindings, ignored listing per whitelist
Jan 22 11:58:45.531: INFO: namespace e2e-tests-subpath-9gxbx deletion completed in 8.244544991s

• [SLOW TEST:43.541 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:58:45.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 22 11:58:45.844: INFO: Waiting up to 5m0s for pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005" in namespace "e2e-tests-containers-mq5nf" to be "success or failure"
Jan 22 11:58:45.856: INFO: Pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027376ms
Jan 22 11:58:48.062: INFO: Pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217248598s
Jan 22 11:58:50.077: INFO: Pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23276155s
Jan 22 11:58:52.536: INFO: Pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.69194428s
Jan 22 11:58:54.573: INFO: Pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728282055s
Jan 22 11:58:56.623: INFO: Pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.778680606s
STEP: Saw pod success
Jan 22 11:58:56.623: INFO: Pod "client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:58:56.637: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 11:58:56.713: INFO: Waiting for pod client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005 to disappear
Jan 22 11:58:56.759: INFO: Pod client-containers-8aef876e-3d0e-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:58:56.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mq5nf" for this suite.
Jan 22 11:59:04.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:59:04.822: INFO: namespace: e2e-tests-containers-mq5nf, resource: bindings, ignored listing per whitelist
Jan 22 11:59:04.930: INFO: namespace e2e-tests-containers-mq5nf deletion completed in 8.162700268s

• [SLOW TEST:19.398 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:59:04.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:59:05.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-thjv9" for this suite.
Jan 22 11:59:11.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:59:11.204: INFO: namespace: e2e-tests-services-thjv9, resource: bindings, ignored listing per whitelist
Jan 22 11:59:11.277: INFO: namespace e2e-tests-services-thjv9 deletion completed in 6.162951853s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.347 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:59:11.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-n6ltj/configmap-test-9a43f651-3d0e-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 11:59:11.545: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-n6ltj" to be "success or failure"
Jan 22 11:59:11.646: INFO: Pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 100.743702ms
Jan 22 11:59:13.660: INFO: Pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114699669s
Jan 22 11:59:15.679: INFO: Pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134344426s
Jan 22 11:59:18.167: INFO: Pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621725822s
Jan 22 11:59:20.218: INFO: Pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.673015316s
Jan 22 11:59:22.567: INFO: Pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.021752223s
STEP: Saw pod success
Jan 22 11:59:22.567: INFO: Pod "pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 11:59:22.588: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005 container env-test: 
STEP: delete the pod
Jan 22 11:59:22.799: INFO: Waiting for pod pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005 to disappear
Jan 22 11:59:22.816: INFO: Pod pod-configmaps-9a4486d6-3d0e-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:59:22.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n6ltj" for this suite.
Jan 22 11:59:28.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 11:59:29.008: INFO: namespace: e2e-tests-configmap-n6ltj, resource: bindings, ignored listing per whitelist
Jan 22 11:59:29.093: INFO: namespace e2e-tests-configmap-n6ltj deletion completed in 6.247279932s

• [SLOW TEST:17.816 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 11:59:29.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 11:59:52.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-cdw92" for this suite.
Jan 22 12:00:16.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:00:16.644: INFO: namespace: e2e-tests-replication-controller-cdw92, resource: bindings, ignored listing per whitelist
Jan 22 12:00:16.672: INFO: namespace e2e-tests-replication-controller-cdw92 deletion completed in 24.275802928s

• [SLOW TEST:47.579 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:00:16.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:00:16.846: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-dvnnj" to be "success or failure"
Jan 22 12:00:16.859: INFO: Pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.605314ms
Jan 22 12:00:18.879: INFO: Pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033824963s
Jan 22 12:00:20.895: INFO: Pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048984825s
Jan 22 12:00:22.905: INFO: Pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059471106s
Jan 22 12:00:24.921: INFO: Pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075373224s
Jan 22 12:00:27.097: INFO: Pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.251768253s
STEP: Saw pod success
Jan 22 12:00:27.097: INFO: Pod "downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:00:27.105: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:00:27.420: INFO: Waiting for pod downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005 to disappear
Jan 22 12:00:27.458: INFO: Pod downwardapi-volume-c1317ac8-3d0e-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:00:27.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dvnnj" for this suite.
Jan 22 12:00:33.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:00:33.745: INFO: namespace: e2e-tests-projected-dvnnj, resource: bindings, ignored listing per whitelist
Jan 22 12:00:33.751: INFO: namespace e2e-tests-projected-dvnnj deletion completed in 6.268248637s

• [SLOW TEST:17.079 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:00:33.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:00:34.072: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 22 12:00:34.187: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 22 12:00:39.495: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 22 12:00:43.529: INFO: Creating deployment "test-rolling-update-deployment"
Jan 22 12:00:43.548: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 22 12:00:43.567: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 22 12:00:46.400: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 22 12:00:46.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:00:48.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:00:50.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:00:52.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291243, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:00:54.959: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 22 12:00:55.215: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-qh4jg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qh4jg/deployments/test-rolling-update-deployment,UID:d11c83d3-3d0e-11ea-a994-fa163e34d433,ResourceVersion:19073305,Generation:1,CreationTimestamp:2020-01-22 12:00:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-22 12:00:43 +0000 UTC 2020-01-22 12:00:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-22 12:00:53 +0000 UTC 2020-01-22 12:00:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 22 12:00:55.223: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-qh4jg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qh4jg/replicasets/test-rolling-update-deployment-75db98fb4c,UID:d1258cac-3d0e-11ea-a994-fa163e34d433,ResourceVersion:19073296,Generation:1,CreationTimestamp:2020-01-22 12:00:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d11c83d3-3d0e-11ea-a994-fa163e34d433 0xc001b2b897 0xc001b2b898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 22 12:00:55.223: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 22 12:00:55.224: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-qh4jg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qh4jg/replicasets/test-rolling-update-controller,UID:cb793364-3d0e-11ea-a994-fa163e34d433,ResourceVersion:19073304,Generation:2,CreationTimestamp:2020-01-22 12:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d11c83d3-3d0e-11ea-a994-fa163e34d433 0xc001b2b7d7 0xc001b2b7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 22 12:00:55.231: INFO: Pod "test-rolling-update-deployment-75db98fb4c-94tq9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-94tq9,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-qh4jg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qh4jg/pods/test-rolling-update-deployment-75db98fb4c-94tq9,UID:d1282a92-3d0e-11ea-a994-fa163e34d433,ResourceVersion:19073295,Generation:0,CreationTimestamp:2020-01-22 12:00:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c d1258cac-3d0e-11ea-a994-fa163e34d433 0xc002252b27 0xc002252b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k4kvl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k4kvl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-k4kvl true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002252c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002252c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:00:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:00:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:00:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:00:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-22 12:00:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-22 12:00:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a3cc500aa99480ccb38ecf8caba6fcfd563224cb859fea83ea84bf90265114f0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:00:55.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-qh4jg" for this suite.
Jan 22 12:01:03.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:01:03.439: INFO: namespace: e2e-tests-deployment-qh4jg, resource: bindings, ignored listing per whitelist
Jan 22 12:01:03.445: INFO: namespace e2e-tests-deployment-qh4jg deletion completed in 8.206935209s

• [SLOW TEST:29.693 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
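Note: the Deployment dumped above corresponds roughly to the following manifest, a sketch reconstructed from the logged spec (the generated name suffixes are omitted); the 25% maxSurge/maxUnavailable values are the RollingUpdate defaults, written out here explicitly:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # default; allows one extra pod during this rollout
      maxUnavailable: 25%  # default; rounds down, so no pod may become unavailable
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With replicas=1, the rollout above surges to 2 pods (the `Replicas:2, UpdatedReplicas:1` status seen while `Progressing`), then scales the adopted old ReplicaSet (`test-rolling-update-controller`) down to 0.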
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:01:03.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 22 12:01:12.450: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-dd787f7c-3d0e-11ea-ad91-0242ac110005,GenerateName:,Namespace:e2e-tests-events-d9nc4,SelfLink:/api/v1/namespaces/e2e-tests-events-d9nc4/pods/send-events-dd787f7c-3d0e-11ea-ad91-0242ac110005,UID:dd7a1d6a-3d0e-11ea-a994-fa163e34d433,ResourceVersion:19073362,Generation:0,CreationTimestamp:2020-01-22 12:01:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 271991592,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5d8ks {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5d8ks,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-5d8ks true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001283870} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001283890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:01:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:01:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:01:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:01:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-22 12:01:04 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-22 12:01:11 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://2bae968f761b854ca959d9beb2ff877c1ea0a9c8c98c932f6358b52aa45380ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 22 12:01:14.481: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 22 12:01:16.522: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:01:16.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-d9nc4" for this suite.
Jan 22 12:02:04.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:02:04.796: INFO: namespace: e2e-tests-events-d9nc4, resource: bindings, ignored listing per whitelist
Jan 22 12:02:04.845: INFO: namespace e2e-tests-events-d9nc4 deletion completed in 48.257743569s

• [SLOW TEST:61.400 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
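Note: the pod the Events test submits, reconstructed as YAML from the object dump above (name and `time` label are generated per run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-example   # the test generates a unique name
  labels:
    name: foo
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```

The test then lists events whose `involvedObject` references this pod: the scheduler event comes from source `default-scheduler` (Scheduled), and the kubelet events from the node (typically Pulled/Created/Started).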
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:02:04.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:02:11.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-lvm48" for this suite.
Jan 22 12:02:17.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:02:17.728: INFO: namespace: e2e-tests-namespaces-lvm48, resource: bindings, ignored listing per whitelist
Jan 22 12:02:17.855: INFO: namespace e2e-tests-namespaces-lvm48 deletion completed in 6.263088256s
STEP: Destroying namespace "e2e-tests-nsdeletetest-8r4tr" for this suite.
Jan 22 12:02:17.866: INFO: Namespace e2e-tests-nsdeletetest-8r4tr was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-wf9zm" for this suite.
Jan 22 12:02:23.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:02:23.992: INFO: namespace: e2e-tests-nsdeletetest-wf9zm, resource: bindings, ignored listing per whitelist
Jan 22 12:02:24.115: INFO: namespace e2e-tests-nsdeletetest-wf9zm deletion completed in 6.249248306s

• [SLOW TEST:19.270 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
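Note: the Namespaces test relies on cascading deletion: a Service created inside a namespace is removed along with it, and recreating a namespace with the same basename yields an empty one. A minimal Service of the kind involved might look like this (names here are hypothetical; the e2e test generates its own):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service       # hypothetical; deleting the namespace deletes this too
  namespace: nsdeletetest  # hypothetical namespace stand-in for e2e-tests-nsdeletetest-*
spec:
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 80
```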
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:02:24.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:02:24.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:02:34.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-89d74" for this suite.
Jan 22 12:03:28.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:03:28.494: INFO: namespace: e2e-tests-pods-89d74, resource: bindings, ignored listing per whitelist
Jan 22 12:03:28.605: INFO: namespace e2e-tests-pods-89d74 deletion completed in 54.231631923s

• [SLOW TEST:64.489 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:03:28.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-33af8134-3d0f-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:03:28.951: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-zn9dl" to be "success or failure"
Jan 22 12:03:28.961: INFO: Pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.974149ms
Jan 22 12:03:30.981: INFO: Pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030461615s
Jan 22 12:03:33.014: INFO: Pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063221265s
Jan 22 12:03:35.031: INFO: Pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080015165s
Jan 22 12:03:37.094: INFO: Pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143483139s
Jan 22 12:03:39.171: INFO: Pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.219590285s
STEP: Saw pod success
Jan 22 12:03:39.171: INFO: Pod "pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:03:39.181: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 22 12:03:39.369: INFO: Waiting for pod pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005 to disappear
Jan 22 12:03:39.375: INFO: Pod pod-projected-secrets-33b14a6d-3d0f-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:03:39.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zn9dl" for this suite.
Jan 22 12:03:45.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:03:45.696: INFO: namespace: e2e-tests-projected-zn9dl, resource: bindings, ignored listing per whitelist
Jan 22 12:03:45.720: INFO: namespace e2e-tests-projected-zn9dl deletion completed in 6.337662625s

• [SLOW TEST:17.114 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
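Note: "mappings and Item Mode" refers to a projected secret volume that remaps secret keys to file paths and sets a per-item file mode. A sketch of the relevant pod spec, with illustrative key, path, and mode values (the actual test generates suffixed names and uses its own mount-testing image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # the test generates a unique name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed; prints file mode/content then exits
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map  # the test appends a UID suffix
          items:
          - key: data-1          # illustrative key
            path: new-path-data-1
            mode: 0400           # the per-item mode under test
```

The pod runs to `Succeeded` once the container has verified the mapped path and mode, which is the "success or failure" condition polled in the log.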
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:03:45.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-3dde8cb5-3d0f-11ea-ad91-0242ac110005
STEP: Creating secret with name s-test-opt-upd-3dde9124-3d0f-11ea-ad91-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3dde8cb5-3d0f-11ea-ad91-0242ac110005
STEP: Updating secret s-test-opt-upd-3dde9124-3d0f-11ea-ad91-0242ac110005
STEP: Creating secret with name s-test-opt-create-3dde9286-3d0f-11ea-ad91-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:04:02.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8g9hg" for this suite.
Jan 22 12:04:26.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:04:26.759: INFO: namespace: e2e-tests-projected-8g9hg, resource: bindings, ignored listing per whitelist
Jan 22 12:04:26.856: INFO: namespace e2e-tests-projected-8g9hg deletion completed in 24.233433921s

• [SLOW TEST:41.136 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
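Note: the "optional updates" test exercises three `optional: true` secret sources in one projected volume: one deleted mid-test, one updated, and one created only after the pod starts. A sketch of the volume stanza (secret names shortened; the test appends UID suffixes):

```yaml
volumes:
- name: projected-secret-volumes
  projected:
    sources:
    - secret:
        name: s-test-opt-del    # deleted mid-test; optional, so the pod keeps running
        optional: true
    - secret:
        name: s-test-opt-upd    # updated mid-test; kubelet syncs the new file content
        optional: true
    - secret:
        name: s-test-opt-create # created after the pod starts; appears in the volume later
        optional: true
```

The "waiting to observe update in volume" step takes on the order of tens of seconds because the kubelet refreshes projected volume contents on its periodic sync, not instantly.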
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:04:26.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-565a6a2f-3d0f-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 12:04:27.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-cl8cs" to be "success or failure"
Jan 22 12:04:27.151: INFO: Pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.658023ms
Jan 22 12:04:29.549: INFO: Pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.410787838s
Jan 22 12:04:31.573: INFO: Pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434610315s
Jan 22 12:04:33.738: INFO: Pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5996734s
Jan 22 12:04:35.754: INFO: Pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.615446358s
Jan 22 12:04:37.765: INFO: Pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.626720178s
STEP: Saw pod success
Jan 22 12:04:37.765: INFO: Pod "pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:04:37.770: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 22 12:04:38.373: INFO: Waiting for pod pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005 to disappear
Jan 22 12:04:38.796: INFO: Pod pod-configmaps-5661703c-3d0f-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:04:38.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cl8cs" for this suite.
Jan 22 12:04:44.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:04:45.140: INFO: namespace: e2e-tests-configmap-cl8cs, resource: bindings, ignored listing per whitelist
Jan 22 12:04:45.159: INFO: namespace e2e-tests-configmap-cl8cs deletion completed in 6.352100086s

• [SLOW TEST:18.301 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
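The ConfigMap test above mounts a ConfigMap into a volume with key-to-path mappings and runs the pod as a non-root user. A minimal sketch of the kind of manifest it exercises (names, UID, image, and paths here are illustrative, not the exact ones the suite generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative name
spec:
  securityContext:
    runAsUser: 1000                   # non-root UID (assumed value)
  containers:
  - name: configmap-volume-test       # container name matches the log
    image: busybox                    # assumed test image
    command: ["cat", "/etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map # the test uses a generated suffix
      items:                          # the "mappings" in the spec title
      - key: data                     # illustrative key
        path: path/to/data            # remapped path inside the volume
```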
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:04:45.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0122 12:05:16.039083       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 12:05:16.039: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:05:16.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-j2jl4" for this suite.
Jan 22 12:05:24.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:05:24.321: INFO: namespace: e2e-tests-gc-j2jl4, resource: bindings, ignored listing per whitelist
Jan 22 12:05:24.372: INFO: namespace e2e-tests-gc-j2jl4 deletion completed in 8.326827606s

• [SLOW TEST:39.214 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
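The garbage collector test above deletes a Deployment with `deleteOptions.propagationPolicy: Orphan` and then verifies the ReplicaSet survives for 30 seconds. As a sketch, the request body sent with the DELETE call (against `/apis/apps/v1/namespaces/<ns>/deployments/<name>`) looks roughly like:

```yaml
# DeleteOptions body that leaves dependents (the ReplicaSet) behind
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With `Orphan`, the garbage collector removes the owner references from dependents instead of deleting them; `kubectl delete --cascade=false` produced the same behavior on the v1.13-era clients used in this run.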
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:05:24.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-78b71992-3d0f-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:05:24.799: INFO: Waiting up to 5m0s for pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-bv9cw" to be "success or failure"
Jan 22 12:05:24.911: INFO: Pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 111.670398ms
Jan 22 12:05:26.925: INFO: Pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126545042s
Jan 22 12:05:28.942: INFO: Pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143086649s
Jan 22 12:05:31.488: INFO: Pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.689007661s
Jan 22 12:05:33.518: INFO: Pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718944558s
Jan 22 12:05:35.532: INFO: Pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.733204124s
STEP: Saw pod success
Jan 22 12:05:35.532: INFO: Pod "pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:05:35.538: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 22 12:05:36.124: INFO: Waiting for pod pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005 to disappear
Jan 22 12:05:36.292: INFO: Pod pod-secrets-78bc5608-3d0f-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:05:36.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bv9cw" for this suite.
Jan 22 12:05:42.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:05:42.761: INFO: namespace: e2e-tests-secrets-bv9cw, resource: bindings, ignored listing per whitelist
Jan 22 12:05:42.761: INFO: namespace e2e-tests-secrets-bv9cw deletion completed in 6.448598821s

• [SLOW TEST:18.386 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
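The Secrets test above mounts the same Secret into a pod at two different volume mounts. A minimal sketch, assuming a busybox image and illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example           # illustrative name
spec:
  containers:
  - name: secret-volume-test          # container name matches the log
    image: busybox                    # assumed test image
    command: ["ls", "/etc/secret-volume-1", "/etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test         # the test uses a generated suffix
  - name: secret-volume-2
    secret:
      secretName: secret-test         # same Secret, second volume
```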
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:05:42.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-83863623-3d0f-11ea-ad91-0242ac110005
STEP: Creating secret with name s-test-opt-upd-83863698-3d0f-11ea-ad91-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-83863623-3d0f-11ea-ad91-0242ac110005
STEP: Updating secret s-test-opt-upd-83863698-3d0f-11ea-ad91-0242ac110005
STEP: Creating secret with name s-test-opt-create-838636ce-3d0f-11ea-ad91-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:06:01.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rgg5t" for this suite.
Jan 22 12:06:25.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:06:25.496: INFO: namespace: e2e-tests-secrets-rgg5t, resource: bindings, ignored listing per whitelist
Jan 22 12:06:25.521: INFO: namespace e2e-tests-secrets-rgg5t deletion completed in 24.223738254s

• [SLOW TEST:42.760 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
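The "optional updates" test above creates, deletes, and updates Secrets referenced by a running pod's volumes and waits for the changes to appear in the mounted files. The key field it exercises is `optional` on the secret volume source; a hedged fragment (names illustrative):

```yaml
# Volume entries for a pod that tolerates a missing Secret.
volumes:
- name: s-test-opt-del-volume
  secret:
    secretName: s-test-opt-del      # deleted mid-test; files disappear from the volume
    optional: true                  # pod keeps running even when the Secret is absent
- name: s-test-opt-create-volume
  secret:
    secretName: s-test-opt-create   # created after the pod starts; files appear later
    optional: true
```

The kubelet syncs these volumes periodically, which is why the test "waits to observe update in volume" rather than asserting immediately.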
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:06:25.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 22 12:06:25.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:28.104: INFO: stderr: ""
Jan 22 12:06:28.104: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 22 12:06:28.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:28.302: INFO: stderr: ""
Jan 22 12:06:28.303: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan 22 12:06:33.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:33.447: INFO: stderr: ""
Jan 22 12:06:33.447: INFO: stdout: "update-demo-nautilus-g64wx update-demo-nautilus-tx46m "
Jan 22 12:06:33.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g64wx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:33.569: INFO: stderr: ""
Jan 22 12:06:33.569: INFO: stdout: ""
Jan 22 12:06:33.569: INFO: update-demo-nautilus-g64wx is created but not running
Jan 22 12:06:38.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:38.725: INFO: stderr: ""
Jan 22 12:06:38.725: INFO: stdout: "update-demo-nautilus-g64wx update-demo-nautilus-tx46m "
Jan 22 12:06:38.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g64wx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:38.816: INFO: stderr: ""
Jan 22 12:06:38.816: INFO: stdout: "true"
Jan 22 12:06:38.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g64wx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:38.979: INFO: stderr: ""
Jan 22 12:06:38.979: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 22 12:06:38.979: INFO: validating pod update-demo-nautilus-g64wx
Jan 22 12:06:39.039: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 22 12:06:39.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 22 12:06:39.040: INFO: update-demo-nautilus-g64wx is verified up and running
Jan 22 12:06:39.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tx46m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:39.208: INFO: stderr: ""
Jan 22 12:06:39.208: INFO: stdout: "true"
Jan 22 12:06:39.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tx46m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:39.319: INFO: stderr: ""
Jan 22 12:06:39.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 22 12:06:39.319: INFO: validating pod update-demo-nautilus-tx46m
Jan 22 12:06:39.328: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 22 12:06:39.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 22 12:06:39.328: INFO: update-demo-nautilus-tx46m is verified up and running
STEP: using delete to clean up resources
Jan 22 12:06:39.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:39.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:06:39.449: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 22 12:06:39.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kxdtq'
Jan 22 12:06:39.585: INFO: stderr: "No resources found.\n"
Jan 22 12:06:39.585: INFO: stdout: ""
Jan 22 12:06:39.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kxdtq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 22 12:06:39.696: INFO: stderr: ""
Jan 22 12:06:39.697: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:06:39.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kxdtq" for this suite.
Jan 22 12:07:03.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:07:03.840: INFO: namespace: e2e-tests-kubectl-kxdtq, resource: bindings, ignored listing per whitelist
Jan 22 12:07:03.934: INFO: namespace e2e-tests-kubectl-kxdtq deletion completed in 24.231363643s

• [SLOW TEST:38.413 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
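The Update Demo test above creates a two-replica ReplicationController from stdin (`kubectl create -f -`) and polls with the `-o template` queries shown in the log. Reconstructed from the details visible above (label `name=update-demo`, container name `update-demo`, the nautilus image, two expected replicas), the manifest is roughly:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                       # log expects actual=2 for name=update-demo
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo           # container name queried by the go-templates
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80         # assumed; the validation fetches nautilus.jpg over HTTP
```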
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:07:03.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-b3fc5a11-3d0f-11ea-ad91-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b3fc5a11-3d0f-11ea-ad91-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:07:14.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5nswj" for this suite.
Jan 22 12:07:38.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:07:38.759: INFO: namespace: e2e-tests-configmap-5nswj, resource: bindings, ignored listing per whitelist
Jan 22 12:07:38.793: INFO: namespace e2e-tests-configmap-5nswj deletion completed in 24.195976009s

• [SLOW TEST:34.858 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:07:38.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 22 12:07:49.612: INFO: Successfully updated pod "pod-update-c8be55b6-3d0f-11ea-ad91-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 22 12:07:49.633: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:07:49.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-66jsk" for this suite.
Jan 22 12:08:13.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:08:13.802: INFO: namespace: e2e-tests-pods-66jsk, resource: bindings, ignored listing per whitelist
Jan 22 12:08:13.884: INFO: namespace e2e-tests-pods-66jsk deletion completed in 24.242802391s

• [SLOW TEST:35.090 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:08:13.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-ddb8bddc-3d0f-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:08:14.235: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-2w2vk" to be "success or failure"
Jan 22 12:08:14.254: INFO: Pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.900113ms
Jan 22 12:08:16.283: INFO: Pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047906688s
Jan 22 12:08:18.308: INFO: Pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073309788s
Jan 22 12:08:20.321: INFO: Pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085760205s
Jan 22 12:08:22.507: INFO: Pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271936341s
Jan 22 12:08:24.713: INFO: Pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.477890931s
STEP: Saw pod success
Jan 22 12:08:24.713: INFO: Pod "pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:08:24.728: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 22 12:08:24.918: INFO: Waiting for pod pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005 to disappear
Jan 22 12:08:25.036: INFO: Pod pod-projected-secrets-ddbc1314-3d0f-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:08:25.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2w2vk" for this suite.
Jan 22 12:08:31.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:08:31.213: INFO: namespace: e2e-tests-projected-2w2vk, resource: bindings, ignored listing per whitelist
Jan 22 12:08:31.227: INFO: namespace e2e-tests-projected-2w2vk deletion completed in 6.180646514s

• [SLOW TEST:17.343 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
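The projected-secret test above is the `projected` variant of the plain secret volume: the Secret is wired in as one source of a projected volume. A hedged fragment (names illustrative):

```yaml
# Projected volume with a single Secret source, as exercised above.
volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: projected-secret-test   # the test uses a generated suffix
```

Projected volumes can combine secret, configMap, downwardAPI, and serviceAccountToken sources under one mount point, which is what distinguishes this test from the earlier [sig-storage] Secrets specs.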
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:08:31.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 22 12:08:31.429: INFO: Waiting up to 5m0s for pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005" in namespace "e2e-tests-var-expansion-bqfkw" to be "success or failure"
Jan 22 12:08:31.465: INFO: Pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.715856ms
Jan 22 12:08:33.473: INFO: Pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044002809s
Jan 22 12:08:35.484: INFO: Pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055185317s
Jan 22 12:08:37.648: INFO: Pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218574517s
Jan 22 12:08:39.663: INFO: Pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234116536s
Jan 22 12:08:41.685: INFO: Pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.255520036s
STEP: Saw pod success
Jan 22 12:08:41.685: INFO: Pod "var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:08:41.691: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 22 12:08:41.801: INFO: Waiting for pod var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005 to disappear
Jan 22 12:08:41.829: INFO: Pod var-expansion-e7fd6919-3d0f-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:08:41.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bqfkw" for this suite.
Jan 22 12:08:48.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:08:48.254: INFO: namespace: e2e-tests-var-expansion-bqfkw, resource: bindings, ignored listing per whitelist
Jan 22 12:08:48.342: INFO: namespace e2e-tests-var-expansion-bqfkw deletion completed in 6.44822831s

• [SLOW TEST:17.115 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
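The Variable Expansion test above checks that `$(VAR)` references in a container's `args` are substituted from the pod's environment before the container starts. A minimal sketch, assuming a busybox image and an illustrative variable name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container            # container name matches the log
    image: busybox                  # assumed test image
    env:
    - name: TEST_VAR                # illustrative variable
      value: test-value
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]      # expanded by Kubernetes, not the shell
```

Note the `$(VAR)` syntax is resolved by Kubernetes itself; `$$(VAR)` would escape it and pass the literal text through to the container.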
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:08:48.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 22 12:08:56.606: INFO: 10 pods remaining
Jan 22 12:08:56.606: INFO: 10 pods has nil DeletionTimestamp
Jan 22 12:08:56.606: INFO: 
Jan 22 12:08:57.307: INFO: 5 pods remaining
Jan 22 12:08:57.307: INFO: 0 pods has nil DeletionTimestamp
Jan 22 12:08:57.307: INFO: 
STEP: Gathering metrics
W0122 12:08:58.346355       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 12:08:58.346: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:08:58.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7q2sc" for this suite.
Jan 22 12:09:10.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:09:10.737: INFO: namespace: e2e-tests-gc-7q2sc, resource: bindings, ignored listing per whitelist
Jan 22 12:09:10.799: INFO: namespace e2e-tests-gc-7q2sc deletion completed in 12.422811518s

• [SLOW TEST:22.457 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
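The test above exercises foreground cascading deletion: with `deleteOptions.propagationPolicy: Foreground`, the API server keeps the replication controller around (marked with a `foregroundDeletion` finalizer) until the garbage collector has deleted all of its dependent pods, which is what the "10 pods remaining" / "5 pods remaining" countdown in the log shows. A minimal sketch of the delete request body (the standard DeleteOptions shape, not the test's literal client call):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```

Sent as the body of a `DELETE` against the replication controller, e.g. `DELETE /api/v1/namespaces/<namespace>/replicationcontrollers/<name>`.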
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:09:10.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:09:19.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-8k7qd" for this suite.
Jan 22 12:09:25.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:09:25.728: INFO: namespace: e2e-tests-emptydir-wrapper-8k7qd, resource: bindings, ignored listing per whitelist
Jan 22 12:09:25.918: INFO: namespace e2e-tests-emptydir-wrapper-8k7qd deletion completed in 6.505609562s

• [SLOW TEST:15.119 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
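The "should not conflict" case mounts two wrapped volumes, a secret and a configmap (per the cleanup STEPs above), into the same pod and verifies that neither mount interferes with the other. A hand-written sketch of that shape (all names, paths, and the image are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "ls /etc/secret-vol /etc/config-vol"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: config-vol
      mountPath: /etc/config-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapped-secret
  - name: config-vol
    configMap:
      name: wrapped-configmap
```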
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:09:25.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0122 12:10:06.491307       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 12:10:06.491: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:10:06.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xpb66" for this suite.
Jan 22 12:10:17.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:10:18.176: INFO: namespace: e2e-tests-gc-xpb66, resource: bindings, ignored listing per whitelist
Jan 22 12:10:18.298: INFO: namespace e2e-tests-gc-xpb66 deletion completed in 11.79403553s

• [SLOW TEST:52.380 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
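The orphan case is the inverse of the foreground test: `propagationPolicy: Orphan` removes the owner references from the pods instead of deleting them, and the 30-second wait above verifies the garbage collector then leaves the orphaned pods alone. Sketch of the delete options body (standard API shape, not the test's literal call):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```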
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:10:18.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 22 12:10:19.096: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r8rd,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r8rd/configmaps/e2e-watch-test-watch-closed,UID:2822e014-3d10-11ea-a994-fa163e34d433,ResourceVersion:19074780,Generation:0,CreationTimestamp:2020-01-22 12:10:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 12:10:19.096: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r8rd,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r8rd/configmaps/e2e-watch-test-watch-closed,UID:2822e014-3d10-11ea-a994-fa163e34d433,ResourceVersion:19074781,Generation:0,CreationTimestamp:2020-01-22 12:10:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 22 12:10:19.187: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r8rd,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r8rd/configmaps/e2e-watch-test-watch-closed,UID:2822e014-3d10-11ea-a994-fa163e34d433,ResourceVersion:19074782,Generation:0,CreationTimestamp:2020-01-22 12:10:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 22 12:10:19.187: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r8rd,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r8rd/configmaps/e2e-watch-test-watch-closed,UID:2822e014-3d10-11ea-a994-fa163e34d433,ResourceVersion:19074783,Generation:0,CreationTimestamp:2020-01-22 12:10:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:10:19.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8r8rd" for this suite.
Jan 22 12:10:27.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:10:27.595: INFO: namespace: e2e-tests-watch-8r8rd, resource: bindings, ignored listing per whitelist
Jan 22 12:10:27.662: INFO: namespace e2e-tests-watch-8r8rd deletion completed in 8.466055174s

• [SLOW TEST:9.364 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
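The watch restart works because every event carries the object's `resourceVersion` (19074780 through 19074783 in the log above): a new watch started from the last version observed by the closed watch replays exactly the changes that happened in between, here the `MODIFIED` (mutation: 2) and `DELETED` events. In raw API terms, the second watch is roughly a request like the following (namespace and version taken from the log; this is the standard watch request shape, not the test's literal client code):

```
GET /api/v1/namespaces/e2e-tests-watch-8r8rd/configmaps?watch=true&resourceVersion=19074781
```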
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:10:27.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2df47c64-3d10-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:10:29.270: INFO: Waiting up to 5m0s for pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-k94kp" to be "success or failure"
Jan 22 12:10:29.324: INFO: Pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.842244ms
Jan 22 12:10:31.339: INFO: Pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069091678s
Jan 22 12:10:33.352: INFO: Pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081878005s
Jan 22 12:10:35.365: INFO: Pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095609601s
Jan 22 12:10:37.379: INFO: Pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109232155s
Jan 22 12:10:39.398: INFO: Pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.128103586s
STEP: Saw pod success
Jan 22 12:10:39.398: INFO: Pod "pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:10:39.403: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 22 12:10:39.509: INFO: Waiting for pod pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005 to disappear
Jan 22 12:10:39.551: INFO: Pod pod-secrets-2e3a0797-3d10-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:10:39.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-k94kp" for this suite.
Jan 22 12:10:45.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:10:45.812: INFO: namespace: e2e-tests-secrets-k94kp, resource: bindings, ignored listing per whitelist
Jan 22 12:10:45.825: INFO: namespace e2e-tests-secrets-k94kp deletion completed in 6.245759671s

• [SLOW TEST:18.162 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
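The pattern being verified above: a secret projected into the pod as files, with a short-lived test container that reads the mounted file and exits (hence the "success or failure" wait on the pod phase). A minimal sketch (the secret name is from the log; the image, key, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-2df47c64-3d10-11ea-ad91-0242ac110005
```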
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:10:45.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 22 12:10:46.004: INFO: PodSpec: initContainers in spec.initContainers
Jan 22 12:11:55.318: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3835de4d-3d10-11ea-ad91-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-vd757", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-vd757/pods/pod-init-3835de4d-3d10-11ea-ad91-0242ac110005", UID:"3836d364-3d10-11ea-a994-fa163e34d433", ResourceVersion:"19074958", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715291846, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"4700062"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5qd64", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002695440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5qd64", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5qd64", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5qd64", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026201b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002848000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002620230)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002620250)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002620258), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00262025c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291846, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291846, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291846, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715291846, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc00275c040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002592070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025920e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://0678470791a2207f7a7468814a834fc5fe11f34074a8022b765b584ba4a3763e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00275c080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00275c060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:11:55.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vd757" for this suite.
Jan 22 12:12:19.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:12:19.744: INFO: namespace: e2e-tests-init-container-vd757, resource: bindings, ignored listing per whitelist
Jan 22 12:12:19.780: INFO: namespace e2e-tests-init-container-vd757 deletion completed in 24.409280403s

• [SLOW TEST:93.955 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
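The pod spec dumped above reduces to the following shape: with `restartPolicy: Always`, the failing `init1` (`/bin/false`) is retried with backoff (note `RestartCount:3` in the status), `init2` never starts, and the app container `run1` stays waiting, so the pod remains `Pending`. Reconstructed from the log dump (images and commands as dumped; resource limits and tolerations omitted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```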
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:12:19.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-8mztt
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-8mztt
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-8mztt
Jan 22 12:12:20.172: INFO: Found 0 stateful pods, waiting for 1
Jan 22 12:12:30.186: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 22 12:12:30.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 22 12:12:31.051: INFO: stderr: "I0122 12:12:30.471390    3160 log.go:172] (0xc0001386e0) (0xc0005ef400) Create stream\nI0122 12:12:30.472000    3160 log.go:172] (0xc0001386e0) (0xc0005ef400) Stream added, broadcasting: 1\nI0122 12:12:30.490142    3160 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0122 12:12:30.490376    3160 log.go:172] (0xc0001386e0) (0xc000522000) Create stream\nI0122 12:12:30.490451    3160 log.go:172] (0xc0001386e0) (0xc000522000) Stream added, broadcasting: 3\nI0122 12:12:30.497854    3160 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0122 12:12:30.497945    3160 log.go:172] (0xc0001386e0) (0xc0003f4000) Create stream\nI0122 12:12:30.497970    3160 log.go:172] (0xc0001386e0) (0xc0003f4000) Stream added, broadcasting: 5\nI0122 12:12:30.504047    3160 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0122 12:12:30.902898    3160 log.go:172] (0xc0001386e0) Data frame received for 3\nI0122 12:12:30.903019    3160 log.go:172] (0xc000522000) (3) Data frame handling\nI0122 12:12:30.903035    3160 log.go:172] (0xc000522000) (3) Data frame sent\nI0122 12:12:31.042035    3160 log.go:172] (0xc0001386e0) Data frame received for 1\nI0122 12:12:31.042414    3160 log.go:172] (0xc0005ef400) (1) Data frame handling\nI0122 12:12:31.042451    3160 log.go:172] (0xc0005ef400) (1) Data frame sent\nI0122 12:12:31.042478    3160 log.go:172] (0xc0001386e0) (0xc0005ef400) Stream removed, broadcasting: 1\nI0122 12:12:31.042938    3160 log.go:172] (0xc0001386e0) (0xc0003f4000) Stream removed, broadcasting: 5\nI0122 12:12:31.042986    3160 log.go:172] (0xc0001386e0) (0xc000522000) Stream removed, broadcasting: 3\nI0122 12:12:31.043062    3160 log.go:172] (0xc0001386e0) Go away received\nI0122 12:12:31.043256    3160 log.go:172] (0xc0001386e0) (0xc0005ef400) Stream removed, broadcasting: 1\nI0122 12:12:31.043277    3160 log.go:172] (0xc0001386e0) (0xc000522000) Stream removed, broadcasting: 3\nI0122 12:12:31.043283    3160 log.go:172] 
(0xc0001386e0) (0xc0003f4000) Stream removed, broadcasting: 5\n"
Jan 22 12:12:31.051: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 22 12:12:31.051: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 22 12:12:31.066: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 22 12:12:41.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 22 12:12:41.082: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 12:12:41.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999605s
Jan 22 12:12:42.177: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.975996742s
Jan 22 12:12:43.207: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.9279467s
Jan 22 12:12:44.214: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.898668592s
Jan 22 12:12:45.237: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.891049839s
Jan 22 12:12:46.258: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.868132489s
Jan 22 12:12:47.278: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.847679396s
Jan 22 12:12:48.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.82769421s
Jan 22 12:12:49.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.808192858s
Jan 22 12:12:50.327: INFO: Verifying statefulset ss doesn't scale past 1 for another 795.876692ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-8mztt
Jan 22 12:12:51.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:12:51.947: INFO: stderr: "I0122 12:12:51.585625    3183 log.go:172] (0xc00014c6e0) (0xc000718640) Create stream\nI0122 12:12:51.585755    3183 log.go:172] (0xc00014c6e0) (0xc000718640) Stream added, broadcasting: 1\nI0122 12:12:51.591468    3183 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0122 12:12:51.591506    3183 log.go:172] (0xc00014c6e0) (0xc00065cd20) Create stream\nI0122 12:12:51.591516    3183 log.go:172] (0xc00014c6e0) (0xc00065cd20) Stream added, broadcasting: 3\nI0122 12:12:51.592413    3183 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0122 12:12:51.592449    3183 log.go:172] (0xc00014c6e0) (0xc000696000) Create stream\nI0122 12:12:51.592463    3183 log.go:172] (0xc00014c6e0) (0xc000696000) Stream added, broadcasting: 5\nI0122 12:12:51.594631    3183 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0122 12:12:51.720831    3183 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0122 12:12:51.721037    3183 log.go:172] (0xc00065cd20) (3) Data frame handling\nI0122 12:12:51.721070    3183 log.go:172] (0xc00065cd20) (3) Data frame sent\nI0122 12:12:51.937002    3183 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0122 12:12:51.937201    3183 log.go:172] (0xc00014c6e0) (0xc00065cd20) Stream removed, broadcasting: 3\nI0122 12:12:51.937259    3183 log.go:172] (0xc000718640) (1) Data frame handling\nI0122 12:12:51.937297    3183 log.go:172] (0xc000718640) (1) Data frame sent\nI0122 12:12:51.937342    3183 log.go:172] (0xc00014c6e0) (0xc000696000) Stream removed, broadcasting: 5\nI0122 12:12:51.937379    3183 log.go:172] (0xc00014c6e0) (0xc000718640) Stream removed, broadcasting: 1\nI0122 12:12:51.937396    3183 log.go:172] (0xc00014c6e0) Go away received\nI0122 12:12:51.938910    3183 log.go:172] (0xc00014c6e0) (0xc000718640) Stream removed, broadcasting: 1\nI0122 12:12:51.939108    3183 log.go:172] (0xc00014c6e0) (0xc00065cd20) Stream removed, broadcasting: 3\nI0122 12:12:51.939128    3183 log.go:172] 
(0xc00014c6e0) (0xc000696000) Stream removed, broadcasting: 5\n"
Jan 22 12:12:51.947: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 22 12:12:51.947: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 22 12:12:51.983: INFO: Found 1 stateful pods, waiting for 3
Jan 22 12:13:02.103: INFO: Found 2 stateful pods, waiting for 3
Jan 22 12:13:11.997: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 12:13:11.997: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 12:13:11.997: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 22 12:13:22.040: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 12:13:22.040: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 22 12:13:22.040: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 22 12:13:22.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 22 12:13:22.641: INFO: stderr: "I0122 12:13:22.281045    3204 log.go:172] (0xc000170840) (0xc0007ca640) Create stream\nI0122 12:13:22.281314    3204 log.go:172] (0xc000170840) (0xc0007ca640) Stream added, broadcasting: 1\nI0122 12:13:22.291192    3204 log.go:172] (0xc000170840) Reply frame received for 1\nI0122 12:13:22.291237    3204 log.go:172] (0xc000170840) (0xc0005d2be0) Create stream\nI0122 12:13:22.291243    3204 log.go:172] (0xc000170840) (0xc0005d2be0) Stream added, broadcasting: 3\nI0122 12:13:22.293066    3204 log.go:172] (0xc000170840) Reply frame received for 3\nI0122 12:13:22.293224    3204 log.go:172] (0xc000170840) (0xc00079a000) Create stream\nI0122 12:13:22.293256    3204 log.go:172] (0xc000170840) (0xc00079a000) Stream added, broadcasting: 5\nI0122 12:13:22.298135    3204 log.go:172] (0xc000170840) Reply frame received for 5\nI0122 12:13:22.431638    3204 log.go:172] (0xc000170840) Data frame received for 3\nI0122 12:13:22.431800    3204 log.go:172] (0xc0005d2be0) (3) Data frame handling\nI0122 12:13:22.431842    3204 log.go:172] (0xc0005d2be0) (3) Data frame sent\nI0122 12:13:22.621939    3204 log.go:172] (0xc000170840) Data frame received for 1\nI0122 12:13:22.622086    3204 log.go:172] (0xc000170840) (0xc00079a000) Stream removed, broadcasting: 5\nI0122 12:13:22.622220    3204 log.go:172] (0xc000170840) (0xc0005d2be0) Stream removed, broadcasting: 3\nI0122 12:13:22.622378    3204 log.go:172] (0xc0007ca640) (1) Data frame handling\nI0122 12:13:22.622422    3204 log.go:172] (0xc0007ca640) (1) Data frame sent\nI0122 12:13:22.622449    3204 log.go:172] (0xc000170840) (0xc0007ca640) Stream removed, broadcasting: 1\nI0122 12:13:22.622473    3204 log.go:172] (0xc000170840) Go away received\nI0122 12:13:22.623369    3204 log.go:172] (0xc000170840) (0xc0007ca640) Stream removed, broadcasting: 1\nI0122 12:13:22.623390    3204 log.go:172] (0xc000170840) (0xc0005d2be0) Stream removed, broadcasting: 3\nI0122 12:13:22.623400    3204 log.go:172] 
(0xc000170840) (0xc00079a000) Stream removed, broadcasting: 5\n"
Jan 22 12:13:22.642: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 22 12:13:22.642: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 22 12:13:22.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 22 12:13:23.411: INFO: stderr: "I0122 12:13:22.968886    3226 log.go:172] (0xc00014c630) (0xc000332780) Create stream\nI0122 12:13:22.969138    3226 log.go:172] (0xc00014c630) (0xc000332780) Stream added, broadcasting: 1\nI0122 12:13:22.979002    3226 log.go:172] (0xc00014c630) Reply frame received for 1\nI0122 12:13:22.979395    3226 log.go:172] (0xc00014c630) (0xc0008fc000) Create stream\nI0122 12:13:22.979524    3226 log.go:172] (0xc00014c630) (0xc0008fc000) Stream added, broadcasting: 3\nI0122 12:13:22.984565    3226 log.go:172] (0xc00014c630) Reply frame received for 3\nI0122 12:13:22.984702    3226 log.go:172] (0xc00014c630) (0xc000174f00) Create stream\nI0122 12:13:22.984724    3226 log.go:172] (0xc00014c630) (0xc000174f00) Stream added, broadcasting: 5\nI0122 12:13:22.985919    3226 log.go:172] (0xc00014c630) Reply frame received for 5\nI0122 12:13:23.149930    3226 log.go:172] (0xc00014c630) Data frame received for 3\nI0122 12:13:23.150118    3226 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0122 12:13:23.150162    3226 log.go:172] (0xc0008fc000) (3) Data frame sent\nI0122 12:13:23.398509    3226 log.go:172] (0xc00014c630) Data frame received for 1\nI0122 12:13:23.398697    3226 log.go:172] (0xc00014c630) (0xc0008fc000) Stream removed, broadcasting: 3\nI0122 12:13:23.398767    3226 log.go:172] (0xc000332780) (1) Data frame handling\nI0122 12:13:23.398798    3226 log.go:172] (0xc000332780) (1) Data frame sent\nI0122 12:13:23.398883    3226 log.go:172] (0xc00014c630) (0xc000174f00) Stream removed, broadcasting: 5\nI0122 12:13:23.398935    3226 log.go:172] (0xc00014c630) (0xc000332780) Stream removed, broadcasting: 1\nI0122 12:13:23.398974    3226 log.go:172] (0xc00014c630) Go away received\nI0122 12:13:23.399587    3226 log.go:172] (0xc00014c630) (0xc000332780) Stream removed, broadcasting: 1\nI0122 12:13:23.399602    3226 log.go:172] (0xc00014c630) (0xc0008fc000) Stream removed, broadcasting: 3\nI0122 12:13:23.399611    3226 log.go:172] 
(0xc00014c630) (0xc000174f00) Stream removed, broadcasting: 5\n"
Jan 22 12:13:23.411: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 22 12:13:23.411: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 22 12:13:23.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 22 12:13:24.150: INFO: stderr: "I0122 12:13:23.592920    3248 log.go:172] (0xc0006ce370) (0xc0006fa640) Create stream\nI0122 12:13:23.593233    3248 log.go:172] (0xc0006ce370) (0xc0006fa640) Stream added, broadcasting: 1\nI0122 12:13:23.599524    3248 log.go:172] (0xc0006ce370) Reply frame received for 1\nI0122 12:13:23.599593    3248 log.go:172] (0xc0006ce370) (0xc00062ed20) Create stream\nI0122 12:13:23.599621    3248 log.go:172] (0xc0006ce370) (0xc00062ed20) Stream added, broadcasting: 3\nI0122 12:13:23.600959    3248 log.go:172] (0xc0006ce370) Reply frame received for 3\nI0122 12:13:23.600993    3248 log.go:172] (0xc0006ce370) (0xc000324000) Create stream\nI0122 12:13:23.601001    3248 log.go:172] (0xc0006ce370) (0xc000324000) Stream added, broadcasting: 5\nI0122 12:13:23.602506    3248 log.go:172] (0xc0006ce370) Reply frame received for 5\nI0122 12:13:23.896858    3248 log.go:172] (0xc0006ce370) Data frame received for 3\nI0122 12:13:23.896977    3248 log.go:172] (0xc00062ed20) (3) Data frame handling\nI0122 12:13:23.897015    3248 log.go:172] (0xc00062ed20) (3) Data frame sent\nI0122 12:13:24.138154    3248 log.go:172] (0xc0006ce370) (0xc00062ed20) Stream removed, broadcasting: 3\nI0122 12:13:24.138427    3248 log.go:172] (0xc0006ce370) Data frame received for 1\nI0122 12:13:24.138449    3248 log.go:172] (0xc0006fa640) (1) Data frame handling\nI0122 12:13:24.138473    3248 log.go:172] (0xc0006fa640) (1) Data frame sent\nI0122 12:13:24.138619    3248 log.go:172] (0xc0006ce370) (0xc0006fa640) Stream removed, broadcasting: 1\nI0122 12:13:24.139124    3248 log.go:172] (0xc0006ce370) (0xc000324000) Stream removed, broadcasting: 5\nI0122 12:13:24.139175    3248 log.go:172] (0xc0006ce370) (0xc0006fa640) Stream removed, broadcasting: 1\nI0122 12:13:24.139200    3248 log.go:172] (0xc0006ce370) (0xc00062ed20) Stream removed, broadcasting: 3\nI0122 12:13:24.139210    3248 log.go:172] (0xc0006ce370) (0xc000324000) Stream removed, broadcasting: 5\nI0122 
12:13:24.139361    3248 log.go:172] (0xc0006ce370) Go away received\n"
Jan 22 12:13:24.150: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 22 12:13:24.150: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 22 12:13:24.150: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 12:13:24.175: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 22 12:13:34.195: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 22 12:13:34.196: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 22 12:13:34.196: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 22 12:13:34.253: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999675s
Jan 22 12:13:35.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.956677569s
Jan 22 12:13:36.282: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.946328692s
Jan 22 12:13:37.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.927230231s
Jan 22 12:13:38.324: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.912591702s
Jan 22 12:13:40.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.885400695s
Jan 22 12:13:41.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.975342201s
Jan 22 12:13:42.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953309593s
Jan 22 12:13:43.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 941.536755ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-8mztt
Jan 22 12:13:44.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:13:45.441: INFO: stderr: "I0122 12:13:45.163344    3270 log.go:172] (0xc0006fc370) (0xc000720640) Create stream\nI0122 12:13:45.163489    3270 log.go:172] (0xc0006fc370) (0xc000720640) Stream added, broadcasting: 1\nI0122 12:13:45.168494    3270 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0122 12:13:45.168548    3270 log.go:172] (0xc0006fc370) (0xc00055abe0) Create stream\nI0122 12:13:45.168563    3270 log.go:172] (0xc0006fc370) (0xc00055abe0) Stream added, broadcasting: 3\nI0122 12:13:45.169705    3270 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0122 12:13:45.169724    3270 log.go:172] (0xc0006fc370) (0xc0003f0000) Create stream\nI0122 12:13:45.169734    3270 log.go:172] (0xc0006fc370) (0xc0003f0000) Stream added, broadcasting: 5\nI0122 12:13:45.170518    3270 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0122 12:13:45.319727    3270 log.go:172] (0xc0006fc370) Data frame received for 3\nI0122 12:13:45.319798    3270 log.go:172] (0xc00055abe0) (3) Data frame handling\nI0122 12:13:45.319812    3270 log.go:172] (0xc00055abe0) (3) Data frame sent\nI0122 12:13:45.432800    3270 log.go:172] (0xc0006fc370) Data frame received for 1\nI0122 12:13:45.432985    3270 log.go:172] (0xc0006fc370) (0xc00055abe0) Stream removed, broadcasting: 3\nI0122 12:13:45.433117    3270 log.go:172] (0xc0006fc370) (0xc0003f0000) Stream removed, broadcasting: 5\nI0122 12:13:45.433160    3270 log.go:172] (0xc000720640) (1) Data frame handling\nI0122 12:13:45.433202    3270 log.go:172] (0xc000720640) (1) Data frame sent\nI0122 12:13:45.433223    3270 log.go:172] (0xc0006fc370) (0xc000720640) Stream removed, broadcasting: 1\nI0122 12:13:45.433254    3270 log.go:172] (0xc0006fc370) Go away received\nI0122 12:13:45.433681    3270 log.go:172] (0xc0006fc370) (0xc000720640) Stream removed, broadcasting: 1\nI0122 12:13:45.433711    3270 log.go:172] (0xc0006fc370) (0xc00055abe0) Stream removed, broadcasting: 3\nI0122 12:13:45.433726    3270 log.go:172] 
(0xc0006fc370) (0xc0003f0000) Stream removed, broadcasting: 5\n"
Jan 22 12:13:45.442: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 22 12:13:45.442: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 22 12:13:45.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:13:46.063: INFO: stderr: "I0122 12:13:45.643497    3291 log.go:172] (0xc0006f8370) (0xc000722640) Create stream\nI0122 12:13:45.643677    3291 log.go:172] (0xc0006f8370) (0xc000722640) Stream added, broadcasting: 1\nI0122 12:13:45.648486    3291 log.go:172] (0xc0006f8370) Reply frame received for 1\nI0122 12:13:45.648585    3291 log.go:172] (0xc0006f8370) (0xc000594be0) Create stream\nI0122 12:13:45.648593    3291 log.go:172] (0xc0006f8370) (0xc000594be0) Stream added, broadcasting: 3\nI0122 12:13:45.649567    3291 log.go:172] (0xc0006f8370) Reply frame received for 3\nI0122 12:13:45.649595    3291 log.go:172] (0xc0006f8370) (0xc000148000) Create stream\nI0122 12:13:45.649602    3291 log.go:172] (0xc0006f8370) (0xc000148000) Stream added, broadcasting: 5\nI0122 12:13:45.650765    3291 log.go:172] (0xc0006f8370) Reply frame received for 5\nI0122 12:13:45.865580    3291 log.go:172] (0xc0006f8370) Data frame received for 3\nI0122 12:13:45.865866    3291 log.go:172] (0xc000594be0) (3) Data frame handling\nI0122 12:13:45.865933    3291 log.go:172] (0xc000594be0) (3) Data frame sent\nI0122 12:13:46.053468    3291 log.go:172] (0xc0006f8370) (0xc000594be0) Stream removed, broadcasting: 3\nI0122 12:13:46.053670    3291 log.go:172] (0xc0006f8370) Data frame received for 1\nI0122 12:13:46.053679    3291 log.go:172] (0xc000722640) (1) Data frame handling\nI0122 12:13:46.053702    3291 log.go:172] (0xc000722640) (1) Data frame sent\nI0122 12:13:46.053709    3291 log.go:172] (0xc0006f8370) (0xc000722640) Stream removed, broadcasting: 1\nI0122 12:13:46.054086    3291 log.go:172] (0xc0006f8370) (0xc000148000) Stream removed, broadcasting: 5\nI0122 12:13:46.054168    3291 log.go:172] (0xc0006f8370) Go away received\nI0122 12:13:46.054255    3291 log.go:172] (0xc0006f8370) (0xc000722640) Stream removed, broadcasting: 1\nI0122 12:13:46.054289    3291 log.go:172] (0xc0006f8370) (0xc000594be0) Stream removed, broadcasting: 3\nI0122 12:13:46.054313    3291 log.go:172] 
(0xc0006f8370) (0xc000148000) Stream removed, broadcasting: 5\n"
Jan 22 12:13:46.064: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 22 12:13:46.064: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 22 12:13:46.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:13:46.475: INFO: rc: 126
Jan 22 12:13:46.475: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown
 I0122 12:13:46.256195    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Create stream
I0122 12:13:46.256402    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Stream added, broadcasting: 1
I0122 12:13:46.260130    3313 log.go:172] (0xc00082c2c0) Reply frame received for 1
I0122 12:13:46.260187    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Create stream
I0122 12:13:46.260200    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Stream added, broadcasting: 3
I0122 12:13:46.260893    3313 log.go:172] (0xc00082c2c0) Reply frame received for 3
I0122 12:13:46.260916    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Create stream
I0122 12:13:46.260924    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Stream added, broadcasting: 5
I0122 12:13:46.261809    3313 log.go:172] (0xc00082c2c0) Reply frame received for 5
I0122 12:13:46.455163    3313 log.go:172] (0xc00082c2c0) Data frame received for 3
I0122 12:13:46.455249    3313 log.go:172] (0xc0006b6460) (3) Data frame handling
I0122 12:13:46.455267    3313 log.go:172] (0xc0006b6460) (3) Data frame sent
I0122 12:13:46.466381    3313 log.go:172] (0xc00082c2c0) Data frame received for 1
I0122 12:13:46.466508    3313 log.go:172] (0xc000768780) (1) Data frame handling
I0122 12:13:46.466540    3313 log.go:172] (0xc000768780) (1) Data frame sent
I0122 12:13:46.466621    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Stream removed, broadcasting: 1
I0122 12:13:46.466722    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Stream removed, broadcasting: 3
I0122 12:13:46.466809    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Stream removed, broadcasting: 5
I0122 12:13:46.467053    3313 log.go:172] (0xc00082c2c0) Go away received
I0122 12:13:46.467349    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Stream removed, broadcasting: 1
I0122 12:13:46.467366    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Stream removed, broadcasting: 3
I0122 12:13:46.467384    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc0019e2f90 exit status 126   true [0xc00000f348 0xc00000f3d8 0xc00000f488] [0xc00000f348 0xc00000f3d8 0xc00000f488] [0xc00000f3b8 0xc00000f458] [0x935700 0x935700] 0xc001a48f00 }:
Command stdout:
OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown

stderr:
I0122 12:13:46.256195    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Create stream
I0122 12:13:46.256402    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Stream added, broadcasting: 1
I0122 12:13:46.260130    3313 log.go:172] (0xc00082c2c0) Reply frame received for 1
I0122 12:13:46.260187    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Create stream
I0122 12:13:46.260200    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Stream added, broadcasting: 3
I0122 12:13:46.260893    3313 log.go:172] (0xc00082c2c0) Reply frame received for 3
I0122 12:13:46.260916    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Create stream
I0122 12:13:46.260924    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Stream added, broadcasting: 5
I0122 12:13:46.261809    3313 log.go:172] (0xc00082c2c0) Reply frame received for 5
I0122 12:13:46.455163    3313 log.go:172] (0xc00082c2c0) Data frame received for 3
I0122 12:13:46.455249    3313 log.go:172] (0xc0006b6460) (3) Data frame handling
I0122 12:13:46.455267    3313 log.go:172] (0xc0006b6460) (3) Data frame sent
I0122 12:13:46.466381    3313 log.go:172] (0xc00082c2c0) Data frame received for 1
I0122 12:13:46.466508    3313 log.go:172] (0xc000768780) (1) Data frame handling
I0122 12:13:46.466540    3313 log.go:172] (0xc000768780) (1) Data frame sent
I0122 12:13:46.466621    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Stream removed, broadcasting: 1
I0122 12:13:46.466722    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Stream removed, broadcasting: 3
I0122 12:13:46.466809    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Stream removed, broadcasting: 5
I0122 12:13:46.467053    3313 log.go:172] (0xc00082c2c0) Go away received
I0122 12:13:46.467349    3313 log.go:172] (0xc00082c2c0) (0xc000768780) Stream removed, broadcasting: 1
I0122 12:13:46.467366    3313 log.go:172] (0xc00082c2c0) (0xc0006b6460) Stream removed, broadcasting: 3
I0122 12:13:46.467384    3313 log.go:172] (0xc00082c2c0) (0xc00065cc80) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126

Jan 22 12:13:56.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:13:56.699: INFO: rc: 1
Jan 22 12:13:56.700: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0019e3110 exit status 1   true [0xc00000f4b0 0xc00000f4f0 0xc00000f528] [0xc00000f4b0 0xc00000f4f0 0xc00000f528] [0xc00000f4e8 0xc00000f508] [0x935700 0x935700] 0xc001a49620 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan 22 12:14:06.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:14:06.830: INFO: rc: 1
Jan 22 12:14:06.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000d175f0 exit status 1   true [0xc001a9e0e8 0xc001a9e100 0xc001a9e118] [0xc001a9e0e8 0xc001a9e100 0xc001a9e118] [0xc001a9e0f8 0xc001a9e110] [0x935700 0x935700] 0xc001c1d740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:14:16.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:14:16.993: INFO: rc: 1
Jan 22 12:14:16.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000d17740 exit status 1   true [0xc001a9e120 0xc001a9e138 0xc001a9e150] [0xc001a9e120 0xc001a9e138 0xc001a9e150] [0xc001a9e130 0xc001a9e148] [0x935700 0x935700] 0xc001c1d9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:14:26.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:14:27.151: INFO: rc: 1
Jan 22 12:14:27.152: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000200000 exit status 1   true [0xc000462218 0xc000462278 0xc000462290] [0xc000462218 0xc000462278 0xc000462290] [0xc000462268 0xc000462288] [0x935700 0x935700] 0xc001a62cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:14:37.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:14:37.348: INFO: rc: 1
Jan 22 12:14:37.349: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000d17890 exit status 1   true [0xc001a9e158 0xc001a9e170 0xc001a9e188] [0xc001a9e158 0xc001a9e170 0xc001a9e188] [0xc001a9e168 0xc001a9e180] [0x935700 0x935700] 0xc001c1dc80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:14:47.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:14:47.505: INFO: rc: 1
Jan 22 12:14:47.506: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0019e3230 exit status 1   true [0xc00000f538 0xc00000f550 0xc00000f5d0] [0xc00000f538 0xc00000f550 0xc00000f5d0] [0xc00000f548 0xc00000f588] [0x935700 0x935700] 0xc001a49c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:14:57.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:14:57.690: INFO: rc: 1
Jan 22 12:14:57.690: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00220c180 exit status 1   true [0xc0014e0000 0xc0014e0018 0xc0014e0030] [0xc0014e0000 0xc0014e0018 0xc0014e0030] [0xc0014e0010 0xc0014e0028] [0x935700 0x935700] 0xc0019de780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:15:07.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:15:07.845: INFO: rc: 1
Jan 22 12:15:07.845: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000200180 exit status 1   true [0xc0004622a8 0xc0004622f0 0xc000462308] [0xc0004622a8 0xc0004622f0 0xc000462308] [0xc0004622e8 0xc000462300] [0x935700 0x935700] 0xc001a63ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:15:17.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:15:17.984: INFO: rc: 1
Jan 22 12:15:17.984: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00220c300 exit status 1   true [0xc0014e0038 0xc0014e0050 0xc0014e0068] [0xc0014e0038 0xc0014e0050 0xc0014e0068] [0xc0014e0048 0xc0014e0060] [0x935700 0x935700] 0xc0019df200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:15:27.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:15:28.114: INFO: rc: 1
Jan 22 12:15:28.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00220c450 exit status 1   true [0xc0014e0078 0xc0014e0090 0xc0014e00a8] [0xc0014e0078 0xc0014e0090 0xc0014e00a8] [0xc0014e0088 0xc0014e00a0] [0x935700 0x935700] 0xc0019dfe00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:15:38.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:15:38.265: INFO: rc: 1
Jan 22 12:15:38.265: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001922120 exit status 1   true [0xc0000ee0f0 0xc000462020 0xc000462110] [0xc0000ee0f0 0xc000462020 0xc000462110] [0xc0000ee238 0xc000462100] [0x935700 0x935700] 0xc0019de780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:15:48.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:15:48.462: INFO: rc: 1
Jan 22 12:15:48.462: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001922270 exit status 1   true [0xc000462128 0xc000462168 0xc000462218] [0xc000462128 0xc000462168 0xc000462218] [0xc000462150 0xc000462208] [0x935700 0x935700] 0xc0019df200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:15:58.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:15:58.615: INFO: rc: 1
Jan 22 12:15:58.615: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001922480 exit status 1   true [0xc000462260 0xc000462280 0xc0004622a8] [0xc000462260 0xc000462280 0xc0004622a8] [0xc000462278 0xc000462290] [0x935700 0x935700] 0xc001a624e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:16:08.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:16:08.757: INFO: rc: 1
Jan 22 12:16:08.757: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001922960 exit status 1   true [0xc0004622c8 0xc0004622f8 0xc000462320] [0xc0004622c8 0xc0004622f8 0xc000462320] [0xc0004622f0 0xc000462308] [0x935700 0x935700] 0xc001a628a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:16:18.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:16:18.910: INFO: rc: 1
Jan 22 12:16:18.911: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00154a150 exit status 1   true [0xc0014e0000 0xc0014e0018 0xc0014e0030] [0xc0014e0000 0xc0014e0018 0xc0014e0030] [0xc0014e0010 0xc0014e0028] [0x935700 0x935700] 0xc001b4a420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:16:28.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:16:29.063: INFO: rc: 1
Jan 22 12:16:29.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0002001e0 exit status 1   true [0xc001a9e000 0xc001a9e018 0xc001a9e030] [0xc001a9e000 0xc001a9e018 0xc001a9e030] [0xc001a9e010 0xc001a9e028] [0x935700 0x935700] 0xc001982e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:16:39.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:16:39.211: INFO: rc: 1
Jan 22 12:16:39.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000200330 exit status 1   true [0xc001a9e038 0xc001a9e050 0xc001a9e068] [0xc001a9e038 0xc001a9e050 0xc001a9e068] [0xc001a9e048 0xc001a9e060] [0x935700 0x935700] 0xc001983c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:16:49.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:16:49.380: INFO: rc: 1
Jan 22 12:16:49.380: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000200450 exit status 1   true [0xc001a9e070 0xc001a9e088 0xc001a9e0a0] [0xc001a9e070 0xc001a9e088 0xc001a9e0a0] [0xc001a9e080 0xc001a9e098] [0x935700 0x935700] 0xc001c1c300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:16:59.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:16:59.567: INFO: rc: 1
Jan 22 12:16:59.568: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001922ae0 exit status 1   true [0xc000462328 0xc000462368 0xc0004623a0] [0xc000462328 0xc000462368 0xc0004623a0] [0xc000462358 0xc000462398] [0x935700 0x935700] 0xc001a62cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:17:09.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:17:09.697: INFO: rc: 1
Jan 22 12:17:09.697: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000200630 exit status 1   true [0xc001a9e0a8 0xc001a9e0c0 0xc001a9e0d8] [0xc001a9e0a8 0xc001a9e0c0 0xc001a9e0d8] [0xc001a9e0b8 0xc001a9e0d0] [0x935700 0x935700] 0xc001c1c780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:17:19.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:17:19.896: INFO: rc: 1
Jan 22 12:17:19.897: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001922c60 exit status 1   true [0xc0004623b0 0xc0004623f8 0xc000462430] [0xc0004623b0 0xc0004623f8 0xc000462430] [0xc0004623e0 0xc000462428] [0x935700 0x935700] 0xc001a63ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:17:29.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:17:30.068: INFO: rc: 1
Jan 22 12:17:30.068: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00220c0f0 exit status 1   true [0xc00000e010 0xc00000ec50 0xc00000ecd8] [0xc00000e010 0xc00000ec50 0xc00000ecd8] [0xc00000ec20 0xc00000ecb0] [0x935700 0x935700] 0xc001a48660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:17:40.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:17:40.234: INFO: rc: 1
Jan 22 12:17:40.234: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001922150 exit status 1   true [0xc0014e0000 0xc0014e0018 0xc0014e0030] [0xc0014e0000 0xc0014e0018 0xc0014e0030] [0xc0014e0010 0xc0014e0028] [0x935700 0x935700] 0xc001a62120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:17:50.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:17:50.391: INFO: rc: 1
Jan 22 12:17:50.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000200180 exit status 1   true [0xc000462020 0xc000462110 0xc000462150] [0xc000462020 0xc000462110 0xc000462150] [0xc000462100 0xc000462130] [0x935700 0x935700] 0xc0019de780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:18:00.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:18:00.606: INFO: rc: 1
Jan 22 12:18:00.607: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000200300 exit status 1   true [0xc000462168 0xc000462218 0xc000462278] [0xc000462168 0xc000462218 0xc000462278] [0xc000462208 0xc000462268] [0x935700 0x935700] 0xc0019df200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:18:10.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:18:10.811: INFO: rc: 1
Jan 22 12:18:10.811: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0019222a0 exit status 1   true [0xc0014e0038 0xc0014e0050 0xc0014e0068] [0xc0014e0038 0xc0014e0050 0xc0014e0068] [0xc0014e0048 0xc0014e0060] [0x935700 0x935700] 0xc001a626c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:18:20.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:18:21.006: INFO: rc: 1
Jan 22 12:18:21.006: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00154a1e0 exit status 1   true [0xc0000ee1e8 0xc00000ed48 0xc00000ed68] [0xc0000ee1e8 0xc00000ed48 0xc00000ed68] [0xc00000ed28 0xc00000ed60] [0x935700 0x935700] 0xc0019834a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:18:31.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:18:31.151: INFO: rc: 1
Jan 22 12:18:31.151: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0019224b0 exit status 1   true [0xc0014e0070 0xc0014e0088 0xc0014e00a0] [0xc0014e0070 0xc0014e0088 0xc0014e00a0] [0xc0014e0080 0xc0014e0098] [0x935700 0x935700] 0xc001a62a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:18:41.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:18:41.339: INFO: rc: 1
Jan 22 12:18:41.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00154a330 exit status 1   true [0xc00000edc0 0xc00000edf8 0xc00000eeb8] [0xc00000edc0 0xc00000edf8 0xc00000eeb8] [0xc00000ede8 0xc00000ee58] [0x935700 0x935700] 0xc001b4a0c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 22 12:18:51.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8mztt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 22 12:18:51.523: INFO: rc: 1
Jan 22 12:18:51.524: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan 22 12:18:51.524: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 22 12:18:51.549: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8mztt
Jan 22 12:18:51.555: INFO: Scaling statefulset ss to 0
Jan 22 12:18:51.569: INFO: Waiting for statefulset status.replicas updated to 0
Jan 22 12:18:51.573: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:18:51.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-8mztt" for this suite.
Jan 22 12:18:59.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:18:59.865: INFO: namespace: e2e-tests-statefulset-8mztt, resource: bindings, ignored listing per whitelist
Jan 22 12:18:59.989: INFO: namespace e2e-tests-statefulset-8mztt deletion completed in 8.259171009s

• [SLOW TEST:400.209 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:18:59.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 22 12:19:00.258: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 22 12:19:05.269: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:19:05.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-2j9sx" for this suite.
Jan 22 12:19:14.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:19:14.152: INFO: namespace: e2e-tests-replication-controller-2j9sx, resource: bindings, ignored listing per whitelist
Jan 22 12:19:14.309: INFO: namespace e2e-tests-replication-controller-2j9sx deletion completed in 8.473785014s

• [SLOW TEST:14.319 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:19:14.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0122 12:19:30.617023       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 12:19:30.617: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:19:30.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hwqz6" for this suite.
Jan 22 12:19:52.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:19:52.761: INFO: namespace: e2e-tests-gc-hwqz6, resource: bindings, ignored listing per whitelist
Jan 22 12:19:52.803: INFO: namespace e2e-tests-gc-hwqz6 deletion completed in 22.176373975s

• [SLOW TEST:38.494 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:19:52.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:19:53.056: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 24.997992ms)
Jan 22 12:19:53.074: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.546963ms)
Jan 22 12:19:53.082: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.29677ms)
Jan 22 12:19:53.087: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.86809ms)
Jan 22 12:19:53.093: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.43196ms)
Jan 22 12:19:53.100: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.819215ms)
Jan 22 12:19:53.105: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.7138ms)
Jan 22 12:19:53.115: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.916905ms)
Jan 22 12:19:53.212: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 96.858875ms)
Jan 22 12:19:53.224: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.919354ms)
Jan 22 12:19:53.230: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.006117ms)
Jan 22 12:19:53.236: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.490032ms)
Jan 22 12:19:53.243: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.90898ms)
Jan 22 12:19:53.248: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.665787ms)
Jan 22 12:19:53.254: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.290614ms)
Jan 22 12:19:53.259: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.582883ms)
Jan 22 12:19:53.265: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.694293ms)
Jan 22 12:19:53.272: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.312259ms)
Jan 22 12:19:53.277: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.937573ms)
Jan 22 12:19:53.283: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.216142ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:19:53.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-46lff" for this suite.
Jan 22 12:19:59.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:19:59.524: INFO: namespace: e2e-tests-proxy-46lff, resource: bindings, ignored listing per whitelist
Jan 22 12:19:59.567: INFO: namespace e2e-tests-proxy-46lff deletion completed in 6.278162746s

• [SLOW TEST:6.763 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:19:59.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 22 12:19:59.714: INFO: Waiting up to 5m0s for pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-sqmh4" to be "success or failure"
Jan 22 12:19:59.738: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.534511ms
Jan 22 12:20:01.755: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041306801s
Jan 22 12:20:03.785: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071728995s
Jan 22 12:20:05.996: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281891809s
Jan 22 12:20:08.117: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.402943298s
Jan 22 12:20:10.416: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.701992787s
Jan 22 12:20:12.802: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.088492886s
STEP: Saw pod success
Jan 22 12:20:12.802: INFO: Pod "pod-823dd3ec-3d11-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:20:12.816: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-823dd3ec-3d11-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:20:13.222: INFO: Waiting for pod pod-823dd3ec-3d11-11ea-ad91-0242ac110005 to disappear
Jan 22 12:20:13.258: INFO: Pod pod-823dd3ec-3d11-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:20:13.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sqmh4" for this suite.
Jan 22 12:20:19.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:20:19.561: INFO: namespace: e2e-tests-emptydir-sqmh4, resource: bindings, ignored listing per whitelist
Jan 22 12:20:19.612: INFO: namespace e2e-tests-emptydir-sqmh4 deletion completed in 6.337269772s

• [SLOW TEST:20.044 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:20:19.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 22 12:20:19.823: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix657975206/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:20:19.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k4stb" for this suite.
Jan 22 12:20:25.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:20:26.115: INFO: namespace: e2e-tests-kubectl-k4stb, resource: bindings, ignored listing per whitelist
Jan 22 12:20:26.125: INFO: namespace e2e-tests-kubectl-k4stb deletion completed in 6.221449694s

• [SLOW TEST:6.513 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:20:26.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:20:26.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-cr8wn" to be "success or failure"
Jan 22 12:20:26.337: INFO: Pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.001917ms
Jan 22 12:20:28.352: INFO: Pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071119133s
Jan 22 12:20:30.378: INFO: Pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097361985s
Jan 22 12:20:32.389: INFO: Pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107721456s
Jan 22 12:20:34.492: INFO: Pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.211067396s
Jan 22 12:20:36.535: INFO: Pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.254353725s
STEP: Saw pod success
Jan 22 12:20:36.536: INFO: Pod "downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:20:36.558: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:20:37.357: INFO: Waiting for pod downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005 to disappear
Jan 22 12:20:37.434: INFO: Pod downwardapi-volume-92110db9-3d11-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:20:37.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cr8wn" for this suite.
Jan 22 12:20:43.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:20:43.536: INFO: namespace: e2e-tests-downward-api-cr8wn, resource: bindings, ignored listing per whitelist
Jan 22 12:20:43.655: INFO: namespace e2e-tests-downward-api-cr8wn deletion completed in 6.207141964s

• [SLOW TEST:17.529 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:20:43.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-9c94f620-3d11-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:20:43.949: INFO: Waiting up to 5m0s for pod "pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-6wpdg" to be "success or failure"
Jan 22 12:20:43.967: INFO: Pod "pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.46304ms
Jan 22 12:20:46.188: INFO: Pod "pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238509272s
Jan 22 12:20:48.219: INFO: Pod "pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26966817s
Jan 22 12:20:50.602: INFO: Pod "pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653078044s
Jan 22 12:20:52.696: INFO: Pod "pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.746679035s
STEP: Saw pod success
Jan 22 12:20:52.696: INFO: Pod "pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:20:52.711: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 22 12:20:52.978: INFO: Waiting for pod pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005 to disappear
Jan 22 12:20:52.998: INFO: Pod pod-secrets-9c95de79-3d11-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:20:52.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6wpdg" for this suite.
Jan 22 12:20:59.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:20:59.174: INFO: namespace: e2e-tests-secrets-6wpdg, resource: bindings, ignored listing per whitelist
Jan 22 12:20:59.257: INFO: namespace e2e-tests-secrets-6wpdg deletion completed in 6.253844525s

• [SLOW TEST:15.603 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:20:59.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 22 12:20:59.449: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-swqrh" to be "success or failure"
Jan 22 12:20:59.466: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.41241ms
Jan 22 12:21:01.476: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027227468s
Jan 22 12:21:03.511: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061916184s
Jan 22 12:21:05.531: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08183687s
Jan 22 12:21:08.168: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718945249s
Jan 22 12:21:10.186: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.737315432s
Jan 22 12:21:12.201: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.752059245s
STEP: Saw pod success
Jan 22 12:21:12.201: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 22 12:21:12.211: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 22 12:21:12.722: INFO: Waiting for pod pod-host-path-test to disappear
Jan 22 12:21:13.194: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:21:13.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-swqrh" for this suite.
Jan 22 12:21:19.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:21:19.424: INFO: namespace: e2e-tests-hostpath-swqrh, resource: bindings, ignored listing per whitelist
Jan 22 12:21:19.623: INFO: namespace e2e-tests-hostpath-swqrh deletion completed in 6.39860972s

• [SLOW TEST:20.366 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:21:19.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 22 12:21:20.416: INFO: created pod pod-service-account-defaultsa
Jan 22 12:21:20.416: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 22 12:21:20.518: INFO: created pod pod-service-account-mountsa
Jan 22 12:21:20.518: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 22 12:21:20.547: INFO: created pod pod-service-account-nomountsa
Jan 22 12:21:20.547: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 22 12:21:20.575: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 22 12:21:20.575: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 22 12:21:20.685: INFO: created pod pod-service-account-mountsa-mountspec
Jan 22 12:21:20.685: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 22 12:21:20.694: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 22 12:21:20.694: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 22 12:21:20.732: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 22 12:21:20.732: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 22 12:21:20.758: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 22 12:21:20.758: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 22 12:21:20.899: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 22 12:21:20.900: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:21:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-57gmv" for this suite.
Jan 22 12:21:53.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:21:54.076: INFO: namespace: e2e-tests-svcaccounts-57gmv, resource: bindings, ignored listing per whitelist
Jan 22 12:21:54.121: INFO: namespace e2e-tests-svcaccounts-57gmv deletion completed in 33.016125064s

• [SLOW TEST:34.498 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:21:54.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:21:54.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-gcwb4" to be "success or failure"
Jan 22 12:21:54.489: INFO: Pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.705241ms
Jan 22 12:21:56.509: INFO: Pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122536865s
Jan 22 12:21:58.656: INFO: Pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269420398s
Jan 22 12:22:00.674: INFO: Pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288231842s
Jan 22 12:22:02.691: INFO: Pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.305139184s
Jan 22 12:22:04.708: INFO: Pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321975139s
STEP: Saw pod success
Jan 22 12:22:04.708: INFO: Pod "downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:22:04.716: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:22:04.902: INFO: Waiting for pod downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005 to disappear
Jan 22 12:22:04.913: INFO: Pod downwardapi-volume-c69467df-3d11-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:22:04.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gcwb4" for this suite.
Jan 22 12:22:10.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:22:11.153: INFO: namespace: e2e-tests-downward-api-gcwb4, resource: bindings, ignored listing per whitelist
Jan 22 12:22:11.212: INFO: namespace e2e-tests-downward-api-gcwb4 deletion completed in 6.291442325s

• [SLOW TEST:17.090 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:22:11.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:22:11.868: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d0e2d8d0-3d11-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0015db9ba), BlockOwnerDeletion:(*bool)(0xc0015db9bb)}}
Jan 22 12:22:11.927: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d0c81e47-3d11-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0022ac912), BlockOwnerDeletion:(*bool)(0xc0022ac913)}}
Jan 22 12:22:12.006: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d0ccc490-3d11-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00232328a), BlockOwnerDeletion:(*bool)(0xc00232328b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:22:17.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9lpjv" for this suite.
Jan 22 12:22:23.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:22:23.323: INFO: namespace: e2e-tests-gc-9lpjv, resource: bindings, ignored listing per whitelist
Jan 22 12:22:23.344: INFO: namespace e2e-tests-gc-9lpjv deletion completed in 6.302416306s

• [SLOW TEST:12.132 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:22:23.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d7f4c8bb-3d11-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 12:22:23.629: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-njh9v" to be "success or failure"
Jan 22 12:22:23.642: INFO: Pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.245921ms
Jan 22 12:22:25.880: INFO: Pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251250596s
Jan 22 12:22:27.906: INFO: Pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276751649s
Jan 22 12:22:30.466: INFO: Pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.837510337s
Jan 22 12:22:32.519: INFO: Pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.890335652s
Jan 22 12:22:34.607: INFO: Pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.978193756s
STEP: Saw pod success
Jan 22 12:22:34.607: INFO: Pod "pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:22:34.619: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 22 12:22:34.807: INFO: Waiting for pod pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005 to disappear
Jan 22 12:22:34.822: INFO: Pod pod-projected-configmaps-d7f65a23-3d11-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:22:34.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-njh9v" for this suite.
Jan 22 12:22:41.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:22:41.057: INFO: namespace: e2e-tests-projected-njh9v, resource: bindings, ignored listing per whitelist
Jan 22 12:22:41.149: INFO: namespace e2e-tests-projected-njh9v deletion completed in 6.318282001s

• [SLOW TEST:17.804 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:22:41.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 22 12:23:05.495: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:05.495: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:05.578111       8 log.go:172] (0xc0020d8160) (0xc0029701e0) Create stream
I0122 12:23:05.578249       8 log.go:172] (0xc0020d8160) (0xc0029701e0) Stream added, broadcasting: 1
I0122 12:23:05.588391       8 log.go:172] (0xc0020d8160) Reply frame received for 1
I0122 12:23:05.588470       8 log.go:172] (0xc0020d8160) (0xc002970280) Create stream
I0122 12:23:05.588494       8 log.go:172] (0xc0020d8160) (0xc002970280) Stream added, broadcasting: 3
I0122 12:23:05.590335       8 log.go:172] (0xc0020d8160) Reply frame received for 3
I0122 12:23:05.590389       8 log.go:172] (0xc0020d8160) (0xc002836000) Create stream
I0122 12:23:05.590414       8 log.go:172] (0xc0020d8160) (0xc002836000) Stream added, broadcasting: 5
I0122 12:23:05.591721       8 log.go:172] (0xc0020d8160) Reply frame received for 5
I0122 12:23:05.729750       8 log.go:172] (0xc0020d8160) Data frame received for 3
I0122 12:23:05.729831       8 log.go:172] (0xc002970280) (3) Data frame handling
I0122 12:23:05.729868       8 log.go:172] (0xc002970280) (3) Data frame sent
I0122 12:23:05.879609       8 log.go:172] (0xc0020d8160) (0xc002970280) Stream removed, broadcasting: 3
I0122 12:23:05.879901       8 log.go:172] (0xc0020d8160) Data frame received for 1
I0122 12:23:05.879920       8 log.go:172] (0xc0029701e0) (1) Data frame handling
I0122 12:23:05.879948       8 log.go:172] (0xc0029701e0) (1) Data frame sent
I0122 12:23:05.880072       8 log.go:172] (0xc0020d8160) (0xc0029701e0) Stream removed, broadcasting: 1
I0122 12:23:05.880340       8 log.go:172] (0xc0020d8160) (0xc002836000) Stream removed, broadcasting: 5
I0122 12:23:05.880366       8 log.go:172] (0xc0020d8160) Go away received
I0122 12:23:05.881280       8 log.go:172] (0xc0020d8160) (0xc0029701e0) Stream removed, broadcasting: 1
I0122 12:23:05.881330       8 log.go:172] (0xc0020d8160) (0xc002970280) Stream removed, broadcasting: 3
I0122 12:23:05.881358       8 log.go:172] (0xc0020d8160) (0xc002836000) Stream removed, broadcasting: 5
Jan 22 12:23:05.881: INFO: Exec stderr: ""
Jan 22 12:23:05.881: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:05.881: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:05.966074       8 log.go:172] (0xc00090f340) (0xc00269e140) Create stream
I0122 12:23:05.966193       8 log.go:172] (0xc00090f340) (0xc00269e140) Stream added, broadcasting: 1
I0122 12:23:05.974733       8 log.go:172] (0xc00090f340) Reply frame received for 1
I0122 12:23:05.974789       8 log.go:172] (0xc00090f340) (0xc002836280) Create stream
I0122 12:23:05.974806       8 log.go:172] (0xc00090f340) (0xc002836280) Stream added, broadcasting: 3
I0122 12:23:05.975843       8 log.go:172] (0xc00090f340) Reply frame received for 3
I0122 12:23:05.975888       8 log.go:172] (0xc00090f340) (0xc000d86000) Create stream
I0122 12:23:05.975909       8 log.go:172] (0xc00090f340) (0xc000d86000) Stream added, broadcasting: 5
I0122 12:23:05.977069       8 log.go:172] (0xc00090f340) Reply frame received for 5
I0122 12:23:06.119991       8 log.go:172] (0xc00090f340) Data frame received for 3
I0122 12:23:06.120081       8 log.go:172] (0xc002836280) (3) Data frame handling
I0122 12:23:06.120121       8 log.go:172] (0xc002836280) (3) Data frame sent
I0122 12:23:06.262881       8 log.go:172] (0xc00090f340) Data frame received for 1
I0122 12:23:06.262965       8 log.go:172] (0xc00090f340) (0xc000d86000) Stream removed, broadcasting: 5
I0122 12:23:06.263069       8 log.go:172] (0xc00269e140) (1) Data frame handling
I0122 12:23:06.263102       8 log.go:172] (0xc00090f340) (0xc002836280) Stream removed, broadcasting: 3
I0122 12:23:06.263141       8 log.go:172] (0xc00269e140) (1) Data frame sent
I0122 12:23:06.263160       8 log.go:172] (0xc00090f340) (0xc00269e140) Stream removed, broadcasting: 1
I0122 12:23:06.263183       8 log.go:172] (0xc00090f340) Go away received
I0122 12:23:06.263586       8 log.go:172] (0xc00090f340) (0xc00269e140) Stream removed, broadcasting: 1
I0122 12:23:06.263622       8 log.go:172] (0xc00090f340) (0xc002836280) Stream removed, broadcasting: 3
I0122 12:23:06.263637       8 log.go:172] (0xc00090f340) (0xc000d86000) Stream removed, broadcasting: 5
Jan 22 12:23:06.263: INFO: Exec stderr: ""
Jan 22 12:23:06.263: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:06.264: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:06.355526       8 log.go:172] (0xc000f86370) (0xc000d863c0) Create stream
I0122 12:23:06.355675       8 log.go:172] (0xc000f86370) (0xc000d863c0) Stream added, broadcasting: 1
I0122 12:23:06.365047       8 log.go:172] (0xc000f86370) Reply frame received for 1
I0122 12:23:06.365112       8 log.go:172] (0xc000f86370) (0xc0028363c0) Create stream
I0122 12:23:06.365125       8 log.go:172] (0xc000f86370) (0xc0028363c0) Stream added, broadcasting: 3
I0122 12:23:06.366088       8 log.go:172] (0xc000f86370) Reply frame received for 3
I0122 12:23:06.366112       8 log.go:172] (0xc000f86370) (0xc000d86460) Create stream
I0122 12:23:06.366121       8 log.go:172] (0xc000f86370) (0xc000d86460) Stream added, broadcasting: 5
I0122 12:23:06.367207       8 log.go:172] (0xc000f86370) Reply frame received for 5
I0122 12:23:06.594528       8 log.go:172] (0xc000f86370) Data frame received for 3
I0122 12:23:06.594658       8 log.go:172] (0xc0028363c0) (3) Data frame handling
I0122 12:23:06.594694       8 log.go:172] (0xc0028363c0) (3) Data frame sent
I0122 12:23:06.833454       8 log.go:172] (0xc000f86370) (0xc0028363c0) Stream removed, broadcasting: 3
I0122 12:23:06.833740       8 log.go:172] (0xc000f86370) Data frame received for 1
I0122 12:23:06.833824       8 log.go:172] (0xc000f86370) (0xc000d86460) Stream removed, broadcasting: 5
I0122 12:23:06.833847       8 log.go:172] (0xc000d863c0) (1) Data frame handling
I0122 12:23:06.833885       8 log.go:172] (0xc000d863c0) (1) Data frame sent
I0122 12:23:06.833897       8 log.go:172] (0xc000f86370) (0xc000d863c0) Stream removed, broadcasting: 1
I0122 12:23:06.833908       8 log.go:172] (0xc000f86370) Go away received
I0122 12:23:06.834225       8 log.go:172] (0xc000f86370) (0xc000d863c0) Stream removed, broadcasting: 1
I0122 12:23:06.834238       8 log.go:172] (0xc000f86370) (0xc0028363c0) Stream removed, broadcasting: 3
I0122 12:23:06.834267       8 log.go:172] (0xc000f86370) (0xc000d86460) Stream removed, broadcasting: 5
Jan 22 12:23:06.834: INFO: Exec stderr: ""
Jan 22 12:23:06.834: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:06.834: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:06.968810       8 log.go:172] (0xc000b862c0) (0xc002836780) Create stream
I0122 12:23:06.968948       8 log.go:172] (0xc000b862c0) (0xc002836780) Stream added, broadcasting: 1
I0122 12:23:06.979165       8 log.go:172] (0xc000b862c0) Reply frame received for 1
I0122 12:23:06.979352       8 log.go:172] (0xc000b862c0) (0xc0027320a0) Create stream
I0122 12:23:06.979373       8 log.go:172] (0xc000b862c0) (0xc0027320a0) Stream added, broadcasting: 3
I0122 12:23:06.980997       8 log.go:172] (0xc000b862c0) Reply frame received for 3
I0122 12:23:06.981035       8 log.go:172] (0xc000b862c0) (0xc002732140) Create stream
I0122 12:23:06.981043       8 log.go:172] (0xc000b862c0) (0xc002732140) Stream added, broadcasting: 5
I0122 12:23:06.982733       8 log.go:172] (0xc000b862c0) Reply frame received for 5
I0122 12:23:07.140374       8 log.go:172] (0xc000b862c0) Data frame received for 3
I0122 12:23:07.140463       8 log.go:172] (0xc0027320a0) (3) Data frame handling
I0122 12:23:07.140499       8 log.go:172] (0xc0027320a0) (3) Data frame sent
I0122 12:23:07.252975       8 log.go:172] (0xc000b862c0) Data frame received for 1
I0122 12:23:07.253034       8 log.go:172] (0xc002836780) (1) Data frame handling
I0122 12:23:07.253068       8 log.go:172] (0xc002836780) (1) Data frame sent
I0122 12:23:07.253249       8 log.go:172] (0xc000b862c0) (0xc002836780) Stream removed, broadcasting: 1
I0122 12:23:07.253573       8 log.go:172] (0xc000b862c0) (0xc0027320a0) Stream removed, broadcasting: 3
I0122 12:23:07.253603       8 log.go:172] (0xc000b862c0) (0xc002732140) Stream removed, broadcasting: 5
I0122 12:23:07.253684       8 log.go:172] (0xc000b862c0) (0xc002836780) Stream removed, broadcasting: 1
I0122 12:23:07.253692       8 log.go:172] (0xc000b862c0) (0xc0027320a0) Stream removed, broadcasting: 3
I0122 12:23:07.253699       8 log.go:172] (0xc000b862c0) (0xc002732140) Stream removed, broadcasting: 5
I0122 12:23:07.253706       8 log.go:172] (0xc000b862c0) Go away received
Jan 22 12:23:07.253: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 22 12:23:07.253: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:07.253: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:07.317443       8 log.go:172] (0xc0015322c0) (0xc0027323c0) Create stream
I0122 12:23:07.317503       8 log.go:172] (0xc0015322c0) (0xc0027323c0) Stream added, broadcasting: 1
I0122 12:23:07.321632       8 log.go:172] (0xc0015322c0) Reply frame received for 1
I0122 12:23:07.321706       8 log.go:172] (0xc0015322c0) (0xc00269e1e0) Create stream
I0122 12:23:07.321736       8 log.go:172] (0xc0015322c0) (0xc00269e1e0) Stream added, broadcasting: 3
I0122 12:23:07.323315       8 log.go:172] (0xc0015322c0) Reply frame received for 3
I0122 12:23:07.323382       8 log.go:172] (0xc0015322c0) (0xc000d86500) Create stream
I0122 12:23:07.323397       8 log.go:172] (0xc0015322c0) (0xc000d86500) Stream added, broadcasting: 5
I0122 12:23:07.324899       8 log.go:172] (0xc0015322c0) Reply frame received for 5
I0122 12:23:07.412220       8 log.go:172] (0xc0015322c0) Data frame received for 3
I0122 12:23:07.412390       8 log.go:172] (0xc00269e1e0) (3) Data frame handling
I0122 12:23:07.412421       8 log.go:172] (0xc00269e1e0) (3) Data frame sent
I0122 12:23:07.501446       8 log.go:172] (0xc0015322c0) Data frame received for 1
I0122 12:23:07.501534       8 log.go:172] (0xc0027323c0) (1) Data frame handling
I0122 12:23:07.501568       8 log.go:172] (0xc0027323c0) (1) Data frame sent
I0122 12:23:07.501646       8 log.go:172] (0xc0015322c0) (0xc0027323c0) Stream removed, broadcasting: 1
I0122 12:23:07.501754       8 log.go:172] (0xc0015322c0) (0xc000d86500) Stream removed, broadcasting: 5
I0122 12:23:07.501818       8 log.go:172] (0xc0015322c0) (0xc00269e1e0) Stream removed, broadcasting: 3
I0122 12:23:07.501861       8 log.go:172] (0xc0015322c0) Go away received
I0122 12:23:07.501948       8 log.go:172] (0xc0015322c0) (0xc0027323c0) Stream removed, broadcasting: 1
I0122 12:23:07.501966       8 log.go:172] (0xc0015322c0) (0xc00269e1e0) Stream removed, broadcasting: 3
I0122 12:23:07.501979       8 log.go:172] (0xc0015322c0) (0xc000d86500) Stream removed, broadcasting: 5
Jan 22 12:23:07.502: INFO: Exec stderr: ""
Jan 22 12:23:07.502: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:07.502: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:07.567383       8 log.go:172] (0xc001532790) (0xc0027326e0) Create stream
I0122 12:23:07.567419       8 log.go:172] (0xc001532790) (0xc0027326e0) Stream added, broadcasting: 1
I0122 12:23:07.570496       8 log.go:172] (0xc001532790) Reply frame received for 1
I0122 12:23:07.570535       8 log.go:172] (0xc001532790) (0xc002732780) Create stream
I0122 12:23:07.570575       8 log.go:172] (0xc001532790) (0xc002732780) Stream added, broadcasting: 3
I0122 12:23:07.571710       8 log.go:172] (0xc001532790) Reply frame received for 3
I0122 12:23:07.571751       8 log.go:172] (0xc001532790) (0xc002970320) Create stream
I0122 12:23:07.571766       8 log.go:172] (0xc001532790) (0xc002970320) Stream added, broadcasting: 5
I0122 12:23:07.573949       8 log.go:172] (0xc001532790) Reply frame received for 5
I0122 12:23:07.672406       8 log.go:172] (0xc001532790) Data frame received for 3
I0122 12:23:07.672483       8 log.go:172] (0xc002732780) (3) Data frame handling
I0122 12:23:07.672512       8 log.go:172] (0xc002732780) (3) Data frame sent
I0122 12:23:07.774178       8 log.go:172] (0xc001532790) Data frame received for 1
I0122 12:23:07.774320       8 log.go:172] (0xc0027326e0) (1) Data frame handling
I0122 12:23:07.774358       8 log.go:172] (0xc0027326e0) (1) Data frame sent
I0122 12:23:07.774391       8 log.go:172] (0xc001532790) (0xc0027326e0) Stream removed, broadcasting: 1
I0122 12:23:07.774419       8 log.go:172] (0xc001532790) (0xc002732780) Stream removed, broadcasting: 3
I0122 12:23:07.774478       8 log.go:172] (0xc001532790) (0xc002970320) Stream removed, broadcasting: 5
I0122 12:23:07.774587       8 log.go:172] (0xc001532790) Go away received
I0122 12:23:07.774892       8 log.go:172] (0xc001532790) (0xc0027326e0) Stream removed, broadcasting: 1
I0122 12:23:07.774928       8 log.go:172] (0xc001532790) (0xc002732780) Stream removed, broadcasting: 3
I0122 12:23:07.774944       8 log.go:172] (0xc001532790) (0xc002970320) Stream removed, broadcasting: 5
Jan 22 12:23:07.774: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 22 12:23:07.775: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:07.775: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:07.867115       8 log.go:172] (0xc000b86790) (0xc002836a00) Create stream
I0122 12:23:07.867191       8 log.go:172] (0xc000b86790) (0xc002836a00) Stream added, broadcasting: 1
I0122 12:23:07.877440       8 log.go:172] (0xc000b86790) Reply frame received for 1
I0122 12:23:07.877575       8 log.go:172] (0xc000b86790) (0xc0029703c0) Create stream
I0122 12:23:07.877595       8 log.go:172] (0xc000b86790) (0xc0029703c0) Stream added, broadcasting: 3
I0122 12:23:07.880161       8 log.go:172] (0xc000b86790) Reply frame received for 3
I0122 12:23:07.880221       8 log.go:172] (0xc000b86790) (0xc000d865a0) Create stream
I0122 12:23:07.880243       8 log.go:172] (0xc000b86790) (0xc000d865a0) Stream added, broadcasting: 5
I0122 12:23:07.882943       8 log.go:172] (0xc000b86790) Reply frame received for 5
I0122 12:23:07.988725       8 log.go:172] (0xc000b86790) Data frame received for 3
I0122 12:23:07.988817       8 log.go:172] (0xc0029703c0) (3) Data frame handling
I0122 12:23:07.988870       8 log.go:172] (0xc0029703c0) (3) Data frame sent
I0122 12:23:08.092583       8 log.go:172] (0xc000b86790) Data frame received for 1
I0122 12:23:08.092635       8 log.go:172] (0xc002836a00) (1) Data frame handling
I0122 12:23:08.092658       8 log.go:172] (0xc002836a00) (1) Data frame sent
I0122 12:23:08.092836       8 log.go:172] (0xc000b86790) (0xc0029703c0) Stream removed, broadcasting: 3
I0122 12:23:08.092884       8 log.go:172] (0xc000b86790) (0xc002836a00) Stream removed, broadcasting: 1
I0122 12:23:08.093326       8 log.go:172] (0xc000b86790) (0xc000d865a0) Stream removed, broadcasting: 5
I0122 12:23:08.093408       8 log.go:172] (0xc000b86790) Go away received
I0122 12:23:08.093472       8 log.go:172] (0xc000b86790) (0xc002836a00) Stream removed, broadcasting: 1
I0122 12:23:08.093492       8 log.go:172] (0xc000b86790) (0xc0029703c0) Stream removed, broadcasting: 3
I0122 12:23:08.093500       8 log.go:172] (0xc000b86790) (0xc000d865a0) Stream removed, broadcasting: 5
Jan 22 12:23:08.093: INFO: Exec stderr: ""
Jan 22 12:23:08.093: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:08.093: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:08.160045       8 log.go:172] (0xc000b86c60) (0xc002836be0) Create stream
I0122 12:23:08.160109       8 log.go:172] (0xc000b86c60) (0xc002836be0) Stream added, broadcasting: 1
I0122 12:23:08.163529       8 log.go:172] (0xc000b86c60) Reply frame received for 1
I0122 12:23:08.163554       8 log.go:172] (0xc000b86c60) (0xc00269e280) Create stream
I0122 12:23:08.163561       8 log.go:172] (0xc000b86c60) (0xc00269e280) Stream added, broadcasting: 3
I0122 12:23:08.164256       8 log.go:172] (0xc000b86c60) Reply frame received for 3
I0122 12:23:08.164271       8 log.go:172] (0xc000b86c60) (0xc002836c80) Create stream
I0122 12:23:08.164279       8 log.go:172] (0xc000b86c60) (0xc002836c80) Stream added, broadcasting: 5
I0122 12:23:08.165214       8 log.go:172] (0xc000b86c60) Reply frame received for 5
I0122 12:23:08.279080       8 log.go:172] (0xc000b86c60) Data frame received for 3
I0122 12:23:08.279157       8 log.go:172] (0xc00269e280) (3) Data frame handling
I0122 12:23:08.279186       8 log.go:172] (0xc00269e280) (3) Data frame sent
I0122 12:23:08.427279       8 log.go:172] (0xc000b86c60) (0xc00269e280) Stream removed, broadcasting: 3
I0122 12:23:08.427441       8 log.go:172] (0xc000b86c60) Data frame received for 1
I0122 12:23:08.427453       8 log.go:172] (0xc002836be0) (1) Data frame handling
I0122 12:23:08.427465       8 log.go:172] (0xc002836be0) (1) Data frame sent
I0122 12:23:08.427590       8 log.go:172] (0xc000b86c60) (0xc002836be0) Stream removed, broadcasting: 1
I0122 12:23:08.427713       8 log.go:172] (0xc000b86c60) (0xc002836c80) Stream removed, broadcasting: 5
I0122 12:23:08.427872       8 log.go:172] (0xc000b86c60) Go away received
I0122 12:23:08.427948       8 log.go:172] (0xc000b86c60) (0xc002836be0) Stream removed, broadcasting: 1
I0122 12:23:08.427973       8 log.go:172] (0xc000b86c60) (0xc00269e280) Stream removed, broadcasting: 3
I0122 12:23:08.427987       8 log.go:172] (0xc000b86c60) (0xc002836c80) Stream removed, broadcasting: 5
Jan 22 12:23:08.428: INFO: Exec stderr: ""
Jan 22 12:23:08.428: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:08.428: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:08.564169       8 log.go:172] (0xc0020d86e0) (0xc002970640) Create stream
I0122 12:23:08.564531       8 log.go:172] (0xc0020d86e0) (0xc002970640) Stream added, broadcasting: 1
I0122 12:23:08.578048       8 log.go:172] (0xc0020d86e0) Reply frame received for 1
I0122 12:23:08.578309       8 log.go:172] (0xc0020d86e0) (0xc0029706e0) Create stream
I0122 12:23:08.578329       8 log.go:172] (0xc0020d86e0) (0xc0029706e0) Stream added, broadcasting: 3
I0122 12:23:08.582174       8 log.go:172] (0xc0020d86e0) Reply frame received for 3
I0122 12:23:08.582230       8 log.go:172] (0xc0020d86e0) (0xc00269e320) Create stream
I0122 12:23:08.582259       8 log.go:172] (0xc0020d86e0) (0xc00269e320) Stream added, broadcasting: 5
I0122 12:23:08.583628       8 log.go:172] (0xc0020d86e0) Reply frame received for 5
I0122 12:23:08.734883       8 log.go:172] (0xc0020d86e0) Data frame received for 3
I0122 12:23:08.734953       8 log.go:172] (0xc0029706e0) (3) Data frame handling
I0122 12:23:08.734983       8 log.go:172] (0xc0029706e0) (3) Data frame sent
I0122 12:23:08.844103       8 log.go:172] (0xc0020d86e0) (0xc00269e320) Stream removed, broadcasting: 5
I0122 12:23:08.844376       8 log.go:172] (0xc0020d86e0) Data frame received for 1
I0122 12:23:08.844396       8 log.go:172] (0xc002970640) (1) Data frame handling
I0122 12:23:08.844424       8 log.go:172] (0xc002970640) (1) Data frame sent
I0122 12:23:08.844702       8 log.go:172] (0xc0020d86e0) (0xc0029706e0) Stream removed, broadcasting: 3
I0122 12:23:08.844796       8 log.go:172] (0xc0020d86e0) (0xc002970640) Stream removed, broadcasting: 1
I0122 12:23:08.844842       8 log.go:172] (0xc0020d86e0) Go away received
I0122 12:23:08.845204       8 log.go:172] (0xc0020d86e0) (0xc002970640) Stream removed, broadcasting: 1
I0122 12:23:08.845258       8 log.go:172] (0xc0020d86e0) (0xc0029706e0) Stream removed, broadcasting: 3
I0122 12:23:08.845347       8 log.go:172] (0xc0020d86e0) (0xc00269e320) Stream removed, broadcasting: 5
Jan 22 12:23:08.845: INFO: Exec stderr: ""
Jan 22 12:23:08.845: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qnfnh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 12:23:08.845: INFO: >>> kubeConfig: /root/.kube/config
I0122 12:23:08.929157       8 log.go:172] (0xc00090f8c0) (0xc00269e5a0) Create stream
I0122 12:23:08.929225       8 log.go:172] (0xc00090f8c0) (0xc00269e5a0) Stream added, broadcasting: 1
I0122 12:23:08.933188       8 log.go:172] (0xc00090f8c0) Reply frame received for 1
I0122 12:23:08.933274       8 log.go:172] (0xc00090f8c0) (0xc002836dc0) Create stream
I0122 12:23:08.933293       8 log.go:172] (0xc00090f8c0) (0xc002836dc0) Stream added, broadcasting: 3
I0122 12:23:08.935847       8 log.go:172] (0xc00090f8c0) Reply frame received for 3
I0122 12:23:08.935953       8 log.go:172] (0xc00090f8c0) (0xc002836e60) Create stream
I0122 12:23:08.936054       8 log.go:172] (0xc00090f8c0) (0xc002836e60) Stream added, broadcasting: 5
I0122 12:23:08.937030       8 log.go:172] (0xc00090f8c0) Reply frame received for 5
I0122 12:23:09.034925       8 log.go:172] (0xc00090f8c0) Data frame received for 3
I0122 12:23:09.034999       8 log.go:172] (0xc002836dc0) (3) Data frame handling
I0122 12:23:09.035026       8 log.go:172] (0xc002836dc0) (3) Data frame sent
I0122 12:23:09.173359       8 log.go:172] (0xc00090f8c0) Data frame received for 1
I0122 12:23:09.173499       8 log.go:172] (0xc00090f8c0) (0xc002836e60) Stream removed, broadcasting: 5
I0122 12:23:09.173557       8 log.go:172] (0xc00269e5a0) (1) Data frame handling
I0122 12:23:09.173593       8 log.go:172] (0xc00269e5a0) (1) Data frame sent
I0122 12:23:09.173645       8 log.go:172] (0xc00090f8c0) (0xc002836dc0) Stream removed, broadcasting: 3
I0122 12:23:09.173696       8 log.go:172] (0xc00090f8c0) (0xc00269e5a0) Stream removed, broadcasting: 1
I0122 12:23:09.173738       8 log.go:172] (0xc00090f8c0) Go away received
I0122 12:23:09.174088       8 log.go:172] (0xc00090f8c0) (0xc00269e5a0) Stream removed, broadcasting: 1
I0122 12:23:09.174110       8 log.go:172] (0xc00090f8c0) (0xc002836dc0) Stream removed, broadcasting: 3
I0122 12:23:09.174127       8 log.go:172] (0xc00090f8c0) (0xc002836e60) Stream removed, broadcasting: 5
Jan 22 12:23:09.174: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:23:09.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-qnfnh" for this suite.
Jan 22 12:24:05.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:24:05.363: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-qnfnh, resource: bindings, ignored listing per whitelist
Jan 22 12:24:05.592: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-qnfnh deletion completed in 56.368590029s

• [SLOW TEST:84.444 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
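The spec above verifies that the kubelet rewrites `/etc/hosts` only for containers on the pod network that do not mount their own file; a container that mounts a volume at `/etc/hosts`, or a pod with `hostNetwork: true`, keeps its file untouched. A minimal sketch of such a pod (all names are illustrative, not taken from the test):

```yaml
# Pod whose second container mounts its own /etc/hosts, so the kubelet
# leaves that file alone (the first container gets the managed copy).
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo          # illustrative name
spec:
  volumes:
    - name: hosts-volume
      emptyDir: {}
  containers:
    - name: managed
      image: busybox
      command: ["sleep", "3600"]
    - name: unmanaged
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: hosts-volume
          mountPath: /etc/hosts   # explicit mount disables kubelet management
```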
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:24:05.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:24:05.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2z6ms" for this suite.
Jan 22 12:24:12.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:24:12.116: INFO: namespace: e2e-tests-kubelet-test-2z6ms, resource: bindings, ignored listing per whitelist
Jan 22 12:24:12.205: INFO: namespace e2e-tests-kubelet-test-2z6ms deletion completed in 6.201849278s

• [SLOW TEST:6.612 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:24:12.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 22 12:24:12.393: INFO: Waiting up to 5m0s for pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-r676j" to be "success or failure"
Jan 22 12:24:12.417: INFO: Pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.005043ms
Jan 22 12:24:14.595: INFO: Pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202240111s
Jan 22 12:24:16.616: INFO: Pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223536076s
Jan 22 12:24:18.708: INFO: Pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3149761s
Jan 22 12:24:20.723: INFO: Pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329981442s
Jan 22 12:24:22.758: INFO: Pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.365422287s
STEP: Saw pod success
Jan 22 12:24:22.758: INFO: Pod "downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:24:22.771: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 22 12:24:22.858: INFO: Waiting for pod downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005 to disappear
Jan 22 12:24:22.976: INFO: Pod downward-api-18d7b5c2-3d12-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:24:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r676j" for this suite.
Jan 22 12:24:29.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:24:29.241: INFO: namespace: e2e-tests-downward-api-r676j, resource: bindings, ignored listing per whitelist
Jan 22 12:24:29.262: INFO: namespace e2e-tests-downward-api-r676j deletion completed in 6.278675603s

• [SLOW TEST:17.055 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
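The Downward API spec above injects the pod's UID into the container environment and checks it in the container log. A minimal pod of the kind this test creates might look like this (names other than the logged `dapi-container` are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo POD_UID=$POD_UID"]
      env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid   # pod UID exposed via the downward API
```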
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:24:29.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-230dce72-3d12-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 12:24:29.550: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-c86l7" to be "success or failure"
Jan 22 12:24:29.566: INFO: Pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.086152ms
Jan 22 12:24:31.581: INFO: Pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030336409s
Jan 22 12:24:33.616: INFO: Pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065930079s
Jan 22 12:24:35.815: INFO: Pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264851947s
Jan 22 12:24:38.029: INFO: Pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.479184761s
Jan 22 12:24:40.528: INFO: Pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.977446776s
STEP: Saw pod success
Jan 22 12:24:40.528: INFO: Pod "pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:24:40.566: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 22 12:24:41.058: INFO: Waiting for pod pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005 to disappear
Jan 22 12:24:41.109: INFO: Pod pod-projected-configmaps-230fb723-3d12-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:24:41.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c86l7" for this suite.
Jan 22 12:24:47.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:24:47.397: INFO: namespace: e2e-tests-projected-c86l7, resource: bindings, ignored listing per whitelist
Jan 22 12:24:47.419: INFO: namespace e2e-tests-projected-c86l7 deletion completed in 6.300315452s

• [SLOW TEST:18.157 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
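The projected-ConfigMap spec above checks that `defaultMode` is applied to the files materialized from a projected volume. A sketch of the shape of pod this test creates (ConfigMap and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo        # illustrative name
spec:
  restartPolicy: Never
  volumes:
    - name: config
      projected:
        defaultMode: 0400        # file mode applied to all projected files
        sources:
          - configMap:
              name: my-config    # illustrative ConfigMap name
  containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/config"]
      volumeMounts:
        - name: config
          mountPath: /etc/config
```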
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:24:47.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-2dd41cc1-3d12-11ea-ad91-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:24:59.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-26vd8" for this suite.
Jan 22 12:25:23.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:25:23.977: INFO: namespace: e2e-tests-configmap-26vd8, resource: bindings, ignored listing per whitelist
Jan 22 12:25:24.167: INFO: namespace e2e-tests-configmap-26vd8 deletion completed in 24.369036083s

• [SLOW TEST:36.748 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
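The ConfigMap spec above mounts a ConfigMap that carries both text and binary payloads and verifies both appear in the volume. A minimal example of such an object (name and keys illustrative); `binaryData` values are base64-encoded bytes, while `data` holds plain UTF-8 strings:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo              # illustrative name
data:
  text-key: "hello"
binaryData:
  binary-key: 3q2+7w==           # base64 of the bytes 0xde 0xad 0xbe 0xef
```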
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:25:24.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0122 12:25:26.894817       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 22 12:25:26.895: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:25:26.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cq4dj" for this suite.
Jan 22 12:25:35.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:25:35.656: INFO: namespace: e2e-tests-gc-cq4dj, resource: bindings, ignored listing per whitelist
Jan 22 12:25:35.768: INFO: namespace e2e-tests-gc-cq4dj deletion completed in 8.867369105s

• [SLOW TEST:11.601 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:25:35.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:25:35.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-7zf6r" to be "success or failure"
Jan 22 12:25:35.992: INFO: Pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.260372ms
Jan 22 12:25:38.008: INFO: Pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033539782s
Jan 22 12:25:40.036: INFO: Pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062101843s
Jan 22 12:25:42.375: INFO: Pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400749566s
Jan 22 12:25:44.457: INFO: Pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48272251s
Jan 22 12:25:46.506: INFO: Pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.531318981s
STEP: Saw pod success
Jan 22 12:25:46.506: INFO: Pod "downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:25:46.518: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:25:46.747: INFO: Waiting for pod downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005 to disappear
Jan 22 12:25:46.819: INFO: Pod downwardapi-volume-4aaa73ed-3d12-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:25:46.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7zf6r" for this suite.
Jan 22 12:25:52.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:25:52.919: INFO: namespace: e2e-tests-downward-api-7zf6r, resource: bindings, ignored listing per whitelist
Jan 22 12:25:53.005: INFO: namespace e2e-tests-downward-api-7zf6r deletion completed in 6.171566923s

• [SLOW TEST:17.236 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:25:53.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 22 12:25:53.220: INFO: Waiting up to 5m0s for pod "pod-54f23201-3d12-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-lw6wf" to be "success or failure"
Jan 22 12:25:53.231: INFO: Pod "pod-54f23201-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.492068ms
Jan 22 12:25:55.245: INFO: Pod "pod-54f23201-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02503477s
Jan 22 12:25:57.262: INFO: Pod "pod-54f23201-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042056694s
Jan 22 12:25:59.278: INFO: Pod "pod-54f23201-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058054012s
Jan 22 12:26:01.296: INFO: Pod "pod-54f23201-3d12-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076125404s
Jan 22 12:26:03.306: INFO: Pod "pod-54f23201-3d12-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086383424s
STEP: Saw pod success
Jan 22 12:26:03.306: INFO: Pod "pod-54f23201-3d12-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:26:03.309: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-54f23201-3d12-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:26:03.934: INFO: Waiting for pod pod-54f23201-3d12-11ea-ad91-0242ac110005 to disappear
Jan 22 12:26:04.302: INFO: Pod pod-54f23201-3d12-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:26:04.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lw6wf" for this suite.
Jan 22 12:26:10.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:26:10.635: INFO: namespace: e2e-tests-emptydir-lw6wf, resource: bindings, ignored listing per whitelist
Jan 22 12:26:10.669: INFO: namespace e2e-tests-emptydir-lw6wf deletion completed in 6.349758334s

• [SLOW TEST:17.664 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:26:10.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 22 12:26:10.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 22 12:26:12.615: INFO: stderr: ""
Jan 22 12:26:12.615: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:26:12.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wgmw4" for this suite.
Jan 22 12:26:18.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:26:18.726: INFO: namespace: e2e-tests-kubectl-wgmw4, resource: bindings, ignored listing per whitelist
Jan 22 12:26:18.913: INFO: namespace e2e-tests-kubectl-wgmw4 deletion completed in 6.279854065s

• [SLOW TEST:8.243 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:26:18.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-99ncm
Jan 22 12:26:29.279: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-99ncm
STEP: checking the pod's current state and verifying that restartCount is present
Jan 22 12:26:29.284: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:30:30.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-99ncm" for this suite.
Jan 22 12:30:38.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:30:38.769: INFO: namespace: e2e-tests-container-probe-99ncm, resource: bindings, ignored listing per whitelist
Jan 22 12:30:38.864: INFO: namespace e2e-tests-container-probe-99ncm deletion completed in 8.330630099s

• [SLOW TEST:259.950 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:30:38.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:30:51.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lhmsd" for this suite.
Jan 22 12:30:57.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:30:57.483: INFO: namespace: e2e-tests-kubelet-test-lhmsd, resource: bindings, ignored listing per whitelist
Jan 22 12:30:57.485: INFO: namespace e2e-tests-kubelet-test-lhmsd deletion completed in 6.347897855s

• [SLOW TEST:18.621 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:30:57.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-x5f24
Jan 22 12:31:07.688: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-x5f24
STEP: checking the pod's current state and verifying that restartCount is present
Jan 22 12:31:07.694: INFO: Initial restart count of pod liveness-http is 0
Jan 22 12:31:23.914: INFO: Restart count of pod e2e-tests-container-probe-x5f24/liveness-http is now 1 (16.219894352s elapsed)
Jan 22 12:31:42.400: INFO: Restart count of pod e2e-tests-container-probe-x5f24/liveness-http is now 2 (34.705521654s elapsed)
Jan 22 12:32:02.749: INFO: Restart count of pod e2e-tests-container-probe-x5f24/liveness-http is now 3 (55.054977558s elapsed)
Jan 22 12:32:21.068: INFO: Restart count of pod e2e-tests-container-probe-x5f24/liveness-http is now 4 (1m13.373773772s elapsed)
Jan 22 12:33:26.018: INFO: Restart count of pod e2e-tests-container-probe-x5f24/liveness-http is now 5 (2m18.323615792s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:33:26.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-x5f24" for this suite.
Jan 22 12:33:32.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:33:32.252: INFO: namespace: e2e-tests-container-probe-x5f24, resource: bindings, ignored listing per whitelist
Jan 22 12:33:32.338: INFO: namespace e2e-tests-container-probe-x5f24 deletion completed in 6.262130247s

• [SLOW TEST:154.852 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:33:32.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 22 12:33:32.698: INFO: Waiting up to 5m0s for pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-jnm5t" to be "success or failure"
Jan 22 12:33:32.766: INFO: Pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.728403ms
Jan 22 12:33:34.785: INFO: Pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087442667s
Jan 22 12:33:36.798: INFO: Pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100116553s
Jan 22 12:33:38.946: INFO: Pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248074149s
Jan 22 12:33:40.957: INFO: Pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.259556699s
Jan 22 12:33:43.570: INFO: Pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.872038598s
STEP: Saw pod success
Jan 22 12:33:43.570: INFO: Pod "pod-66d094b3-3d13-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:33:44.061: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-66d094b3-3d13-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:33:44.242: INFO: Waiting for pod pod-66d094b3-3d13-11ea-ad91-0242ac110005 to disappear
Jan 22 12:33:44.327: INFO: Pod pod-66d094b3-3d13-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:33:44.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jnm5t" for this suite.
Jan 22 12:33:50.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:33:50.734: INFO: namespace: e2e-tests-emptydir-jnm5t, resource: bindings, ignored listing per whitelist
Jan 22 12:33:50.740: INFO: namespace e2e-tests-emptydir-jnm5t deletion completed in 6.401810023s

• [SLOW TEST:18.402 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:33:50.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:33:51.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lg7d5" for this suite.
Jan 22 12:34:15.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:34:15.466: INFO: namespace: e2e-tests-pods-lg7d5, resource: bindings, ignored listing per whitelist
Jan 22 12:34:15.501: INFO: namespace e2e-tests-pods-lg7d5 deletion completed in 24.377709211s

• [SLOW TEST:24.760 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:34:15.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:34:15.707: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-qndxv" to be "success or failure"
Jan 22 12:34:15.826: INFO: Pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 118.677655ms
Jan 22 12:34:17.850: INFO: Pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142649337s
Jan 22 12:34:19.887: INFO: Pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180133895s
Jan 22 12:34:21.906: INFO: Pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198867969s
Jan 22 12:34:23.922: INFO: Pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.215130195s
Jan 22 12:34:25.938: INFO: Pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.230720429s
STEP: Saw pod success
Jan 22 12:34:25.938: INFO: Pod "downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:34:25.945: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:34:26.095: INFO: Waiting for pod downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005 to disappear
Jan 22 12:34:26.511: INFO: Pod downwardapi-volume-8072bb78-3d13-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:34:26.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qndxv" for this suite.
Jan 22 12:34:32.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:34:32.905: INFO: namespace: e2e-tests-projected-qndxv, resource: bindings, ignored listing per whitelist
Jan 22 12:34:32.935: INFO: namespace e2e-tests-projected-qndxv deletion completed in 6.372811388s

• [SLOW TEST:17.434 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:34:32.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 22 12:34:33.229: INFO: Waiting up to 5m0s for pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-2l59n" to be "success or failure"
Jan 22 12:34:33.241: INFO: Pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.74984ms
Jan 22 12:34:35.520: INFO: Pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290857416s
Jan 22 12:34:37.597: INFO: Pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367719002s
Jan 22 12:34:39.626: INFO: Pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396871466s
Jan 22 12:34:41.806: INFO: Pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.576652904s
Jan 22 12:34:43.833: INFO: Pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.602968074s
STEP: Saw pod success
Jan 22 12:34:43.833: INFO: Pod "pod-8ae37491-3d13-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:34:43.840: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8ae37491-3d13-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:34:44.395: INFO: Waiting for pod pod-8ae37491-3d13-11ea-ad91-0242ac110005 to disappear
Jan 22 12:34:44.415: INFO: Pod pod-8ae37491-3d13-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:34:44.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2l59n" for this suite.
Jan 22 12:34:50.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:34:50.700: INFO: namespace: e2e-tests-emptydir-2l59n, resource: bindings, ignored listing per whitelist
Jan 22 12:34:50.825: INFO: namespace e2e-tests-emptydir-2l59n deletion completed in 6.401664036s

• [SLOW TEST:17.890 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:34:50.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-9582f0f4-3d13-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 12:34:51.065: INFO: Waiting up to 5m0s for pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-p8wtc" to be "success or failure"
Jan 22 12:34:51.077: INFO: Pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.433283ms
Jan 22 12:34:53.262: INFO: Pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196184669s
Jan 22 12:34:55.274: INFO: Pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208623082s
Jan 22 12:34:57.293: INFO: Pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227848471s
Jan 22 12:35:00.107: INFO: Pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.041834757s
Jan 22 12:35:02.729: INFO: Pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.663255345s
STEP: Saw pod success
Jan 22 12:35:02.729: INFO: Pod "pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:35:02.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 22 12:35:03.065: INFO: Waiting for pod pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005 to disappear
Jan 22 12:35:03.199: INFO: Pod pod-configmaps-958535bb-3d13-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:35:03.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p8wtc" for this suite.
Jan 22 12:35:09.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:35:09.348: INFO: namespace: e2e-tests-configmap-p8wtc, resource: bindings, ignored listing per whitelist
Jan 22 12:35:09.470: INFO: namespace e2e-tests-configmap-p8wtc deletion completed in 6.255011169s

• [SLOW TEST:18.644 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
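The ConfigMap spec above ("mappings and Item mode set") consumes a ConfigMap as a volume, remapping a key to a custom path and setting a per-item file mode. A sketch of the relevant volume stanza, with placeholder key and path names:

```yaml
volumes:
- name: configmap-volume
  configMap:
    name: configmap-test-volume-map   # created in the test's STEP above
    items:
    - key: data-1                     # illustrative key name
      path: path/to/data-2            # "mapping": key remapped to this path
      mode: 0400                      # "Item mode set": per-file mode override
```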
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:35:09.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 22 12:35:09.640: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 22 12:35:09.649: INFO: Waiting for terminating namespaces to be deleted...
Jan 22 12:35:09.652: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 22 12:35:09.664: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 22 12:35:09.664: INFO: 	Container weave ready: true, restart count 0
Jan 22 12:35:09.664: INFO: 	Container weave-npc ready: true, restart count 0
Jan 22 12:35:09.664: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 22 12:35:09.664: INFO: 	Container coredns ready: true, restart count 0
Jan 22 12:35:09.664: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 22 12:35:09.664: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 22 12:35:09.664: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 22 12:35:09.664: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 22 12:35:09.664: INFO: 	Container coredns ready: true, restart count 0
Jan 22 12:35:09.664: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 22 12:35:09.664: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 22 12:35:09.664: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ec359b213c9bf1], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:35:10.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qxmz2" for this suite.
Jan 22 12:35:16.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:35:16.891: INFO: namespace: e2e-tests-sched-pred-qxmz2, resource: bindings, ignored listing per whitelist
Jan 22 12:35:16.955: INFO: namespace e2e-tests-sched-pred-qxmz2 deletion completed in 6.219386691s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.485 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
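The scheduling spec above creates a pod with a nonempty nodeSelector that no node satisfies, then asserts the FailedScheduling warning seen in the event log. A minimal pod that reproduces that outcome (label key/value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  # No node carries this label, so the scheduler emits:
  # "0/1 nodes are available: 1 node(s) didn't match node selector."
  nodeSelector:
    label: nonempty
  containers:
  - name: pause
    image: k8s.gcr.io/pause
```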
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:35:16.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-7ctgb
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-7ctgb
STEP: Deleting pre-stop pod
Jan 22 12:35:40.308: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:35:40.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-7ctgb" for this suite.
Jan 22 12:36:20.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:36:20.731: INFO: namespace: e2e-tests-prestop-7ctgb, resource: bindings, ignored listing per whitelist
Jan 22 12:36:20.858: INFO: namespace e2e-tests-prestop-7ctgb deletion completed in 40.505569766s

• [SLOW TEST:63.902 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
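The PreStop spec above verifies that deleting a pod runs its preStop lifecycle hook before termination; the `"Received": {"prestop": 1}` JSON shows the server pod observed exactly one hook call. A sketch of the tester container's hook (the endpoint path is illustrative, not the test's actual URL):

```yaml
containers:
- name: tester
  image: busybox
  lifecycle:
    preStop:
      exec:
        # Run by the kubelet on pod deletion, before SIGTERM is sent;
        # the e2e tester's hook notifies the server pod like this.
        command: ["sh", "-c", "wget -q -O- http://server/prestop"]
```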
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:36:20.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 22 12:36:21.036: INFO: Waiting up to 5m0s for pod "var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005" in namespace "e2e-tests-var-expansion-2xws7" to be "success or failure"
Jan 22 12:36:21.047: INFO: Pod "var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.659988ms
Jan 22 12:36:23.060: INFO: Pod "var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024758718s
Jan 22 12:36:25.293: INFO: Pod "var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257434298s
Jan 22 12:36:27.309: INFO: Pod "var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273525079s
Jan 22 12:36:29.320: INFO: Pod "var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.284559819s
STEP: Saw pod success
Jan 22 12:36:29.320: INFO: Pod "var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:36:29.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 22 12:36:30.033: INFO: Waiting for pod var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005 to disappear
Jan 22 12:36:30.425: INFO: Pod var-expansion-cb22a259-3d13-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:36:30.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2xws7" for this suite.
Jan 22 12:36:36.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:36:36.639: INFO: namespace: e2e-tests-var-expansion-2xws7, resource: bindings, ignored listing per whitelist
Jan 22 12:36:36.742: INFO: namespace e2e-tests-var-expansion-2xws7 deletion completed in 6.289626293s

• [SLOW TEST:15.883 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
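The Variable Expansion spec above checks that `$(VAR)` references in an env value expand to previously defined env vars. The pattern under test, roughly:

```yaml
env:
- name: FOO
  value: foo-value
- name: BAR
  value: bar-value
- name: FOOBAR
  # $(FOO) and $(BAR) are expanded from the entries above,
  # so the container sees FOOBAR=foo-value;;bar-value
  value: "$(FOO);;$(BAR)"
```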
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:36:36.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 22 12:36:47.559: INFO: Successfully updated pod "annotationupdated49ff930-3d13-11ea-ad91-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:36:49.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mlmq5" for this suite.
Jan 22 12:37:13.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:37:13.788: INFO: namespace: e2e-tests-projected-mlmq5, resource: bindings, ignored listing per whitelist
Jan 22 12:37:14.082: INFO: namespace e2e-tests-projected-mlmq5 deletion completed in 24.394316734s

• [SLOW TEST:37.339 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
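The projected downwardAPI spec above mounts pod annotations as a file and asserts the kubelet rewrites the file after the annotations are updated (the "Successfully updated pod" line). The volume shape involved, sketched:

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: annotations          # file the container watches for changes
          fieldRef:
            fieldPath: metadata.annotations
```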
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:37:14.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 22 12:37:14.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-f7ttx'
Jan 22 12:37:16.294: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 22 12:37:16.295: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 22 12:37:20.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-f7ttx'
Jan 22 12:37:20.923: INFO: stderr: ""
Jan 22 12:37:20.923: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:37:20.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f7ttx" for this suite.
Jan 22 12:37:26.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:37:27.153: INFO: namespace: e2e-tests-kubectl-f7ttx, resource: bindings, ignored listing per whitelist
Jan 22 12:37:27.173: INFO: namespace e2e-tests-kubectl-f7ttx deletion completed in 6.239012094s

• [SLOW TEST:13.089 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
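As the stderr line notes, `kubectl run --generator=deployment/v1beta1` was already deprecated when this suite ran; the `deployment.extensions` stdout shows it created an extensions-group Deployment. The modern equivalent is an explicit manifest (labels follow the `run:` convention `kubectl run` used):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```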
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:37:27.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005
Jan 22 12:37:27.397: INFO: Pod name my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005: Found 0 pods out of 1
Jan 22 12:37:32.676: INFO: Pod name my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005: Found 1 pods out of 1
Jan 22 12:37:32.676: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005" are running
Jan 22 12:37:36.705: INFO: Pod "my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005-qhlrb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 12:37:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 12:37:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 12:37:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-22 12:37:27 +0000 UTC Reason: Message:}])
Jan 22 12:37:36.706: INFO: Trying to dial the pod
Jan 22 12:37:41.759: INFO: Controller my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005: Got expected result from replica 1 [my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005-qhlrb]: "my-hostname-basic-f2b29d6d-3d13-11ea-ad91-0242ac110005-qhlrb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:37:41.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-cdh54" for this suite.
Jan 22 12:37:50.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:37:50.421: INFO: namespace: e2e-tests-replication-controller-cdh54, resource: bindings, ignored listing per whitelist
Jan 22 12:37:50.489: INFO: namespace e2e-tests-replication-controller-cdh54 deletion completed in 8.720217394s

• [SLOW TEST:23.315 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
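The ReplicationController spec above runs one replica of a serve-hostname-style container and dials it, expecting the reply to equal the pod name (as the "Got expected result from replica 1" line confirms). An illustrative RC of the same shape (name, image, and port are placeholders; the test suffixes a UID onto the name):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: example/serve-hostname   # any server that replies with its pod hostname
        ports:
        - containerPort: 9376
```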
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:37:50.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 22 12:37:59.504: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:38:25.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-5sd7x" for this suite.
Jan 22 12:38:31.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:38:31.962: INFO: namespace: e2e-tests-namespaces-5sd7x, resource: bindings, ignored listing per whitelist
Jan 22 12:38:31.966: INFO: namespace e2e-tests-namespaces-5sd7x deletion completed in 6.253493597s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7ct8g" for this suite.
Jan 22 12:38:31.970: INFO: Namespace e2e-tests-nsdeletetest-7ct8g was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-9ttv8" for this suite.
Jan 22 12:38:38.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:38:38.098: INFO: namespace: e2e-tests-nsdeletetest-9ttv8, resource: bindings, ignored listing per whitelist
Jan 22 12:38:38.206: INFO: namespace e2e-tests-nsdeletetest-9ttv8 deletion completed in 6.235690414s

• [SLOW TEST:47.717 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:38:38.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 22 12:38:38.469: INFO: Waiting up to 5m0s for pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-ks4k4" to be "success or failure"
Jan 22 12:38:38.479: INFO: Pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.379125ms
Jan 22 12:38:41.027: INFO: Pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557879518s
Jan 22 12:38:43.041: INFO: Pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571674765s
Jan 22 12:38:45.520: INFO: Pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.051061567s
Jan 22 12:38:47.534: INFO: Pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.064833212s
Jan 22 12:38:49.551: INFO: Pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.081801613s
STEP: Saw pod success
Jan 22 12:38:49.551: INFO: Pod "pod-1d06e100-3d14-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:38:49.555: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1d06e100-3d14-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:38:49.696: INFO: Waiting for pod pod-1d06e100-3d14-11ea-ad91-0242ac110005 to disappear
Jan 22 12:38:49.702: INFO: Pod pod-1d06e100-3d14-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:38:49.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ks4k4" for this suite.
Jan 22 12:38:55.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:38:55.985: INFO: namespace: e2e-tests-emptydir-ks4k4, resource: bindings, ignored listing per whitelist
Jan 22 12:38:56.006: INFO: namespace e2e-tests-emptydir-ks4k4 deletion completed in 6.29275613s

• [SLOW TEST:17.800 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:38:56.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 22 12:38:56.602: INFO: Number of nodes with available pods: 0
Jan 22 12:38:56.602: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:38:57.693: INFO: Number of nodes with available pods: 0
Jan 22 12:38:57.694: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:38:58.623: INFO: Number of nodes with available pods: 0
Jan 22 12:38:58.623: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:38:59.642: INFO: Number of nodes with available pods: 0
Jan 22 12:38:59.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:00.626: INFO: Number of nodes with available pods: 0
Jan 22 12:39:00.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:01.631: INFO: Number of nodes with available pods: 0
Jan 22 12:39:01.631: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:02.832: INFO: Number of nodes with available pods: 0
Jan 22 12:39:02.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:03.675: INFO: Number of nodes with available pods: 0
Jan 22 12:39:03.675: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:04.622: INFO: Number of nodes with available pods: 1
Jan 22 12:39:04.622: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 22 12:39:04.674: INFO: Number of nodes with available pods: 0
Jan 22 12:39:04.674: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:05.691: INFO: Number of nodes with available pods: 0
Jan 22 12:39:05.691: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:06.790: INFO: Number of nodes with available pods: 0
Jan 22 12:39:06.790: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:07.708: INFO: Number of nodes with available pods: 0
Jan 22 12:39:07.708: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:08.695: INFO: Number of nodes with available pods: 0
Jan 22 12:39:08.695: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:09.704: INFO: Number of nodes with available pods: 0
Jan 22 12:39:09.704: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:10.750: INFO: Number of nodes with available pods: 0
Jan 22 12:39:10.750: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:11.720: INFO: Number of nodes with available pods: 0
Jan 22 12:39:11.720: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:12.695: INFO: Number of nodes with available pods: 0
Jan 22 12:39:12.695: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:13.708: INFO: Number of nodes with available pods: 0
Jan 22 12:39:13.708: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:14.695: INFO: Number of nodes with available pods: 0
Jan 22 12:39:14.695: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:15.692: INFO: Number of nodes with available pods: 0
Jan 22 12:39:15.692: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:16.713: INFO: Number of nodes with available pods: 0
Jan 22 12:39:16.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:17.711: INFO: Number of nodes with available pods: 0
Jan 22 12:39:17.711: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:18.711: INFO: Number of nodes with available pods: 0
Jan 22 12:39:18.711: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:19.705: INFO: Number of nodes with available pods: 0
Jan 22 12:39:19.705: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:20.696: INFO: Number of nodes with available pods: 0
Jan 22 12:39:20.696: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:21.728: INFO: Number of nodes with available pods: 0
Jan 22 12:39:21.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:22.834: INFO: Number of nodes with available pods: 0
Jan 22 12:39:22.834: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:24.498: INFO: Number of nodes with available pods: 0
Jan 22 12:39:24.498: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:24.816: INFO: Number of nodes with available pods: 0
Jan 22 12:39:24.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:25.710: INFO: Number of nodes with available pods: 0
Jan 22 12:39:25.710: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:26.774: INFO: Number of nodes with available pods: 0
Jan 22 12:39:26.774: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:27.712: INFO: Number of nodes with available pods: 0
Jan 22 12:39:27.712: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:29.182: INFO: Number of nodes with available pods: 0
Jan 22 12:39:29.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:29.792: INFO: Number of nodes with available pods: 0
Jan 22 12:39:29.792: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:30.707: INFO: Number of nodes with available pods: 0
Jan 22 12:39:30.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:31.703: INFO: Number of nodes with available pods: 0
Jan 22 12:39:31.703: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 12:39:32.691: INFO: Number of nodes with available pods: 1
Jan 22 12:39:32.691: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6nwkj, will wait for the garbage collector to delete the pods
Jan 22 12:39:32.766: INFO: Deleting DaemonSet.extensions daemon-set took: 13.443539ms
Jan 22 12:39:32.867: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.645781ms
Jan 22 12:39:42.686: INFO: Number of nodes with available pods: 0
Jan 22 12:39:42.686: INFO: Number of running nodes: 0, number of available pods: 0
Jan 22 12:39:42.699: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6nwkj/daemonsets","resourceVersion":"19078337"},"items":null}

Jan 22 12:39:42.708: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6nwkj/pods","resourceVersion":"19078337"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:39:42.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6nwkj" for this suite.
Jan 22 12:39:50.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:39:50.927: INFO: namespace: e2e-tests-daemonsets-6nwkj, resource: bindings, ignored listing per whitelist
Jan 22 12:39:51.028: INFO: namespace e2e-tests-daemonsets-6nwkj deletion completed in 8.236417097s

• [SLOW TEST:55.022 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:39:51.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 22 12:40:01.410: INFO: Pod pod-hostip-4881216a-3d14-11ea-ad91-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:40:01.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6p2mc" for this suite.
Jan 22 12:40:25.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:40:25.634: INFO: namespace: e2e-tests-pods-6p2mc, resource: bindings, ignored listing per whitelist
Jan 22 12:40:25.638: INFO: namespace e2e-tests-pods-6p2mc deletion completed in 24.218798077s

• [SLOW TEST:34.610 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:40:25.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:40:25.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:40:36.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cbf4d" for this suite.
Jan 22 12:41:24.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:41:24.786: INFO: namespace: e2e-tests-pods-cbf4d, resource: bindings, ignored listing per whitelist
Jan 22 12:41:24.810: INFO: namespace e2e-tests-pods-cbf4d deletion completed in 48.299907174s

• [SLOW TEST:59.172 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:41:24.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:41:24.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-vb5gx" to be "success or failure"
Jan 22 12:41:25.046: INFO: Pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.56625ms
Jan 22 12:41:27.070: INFO: Pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087848952s
Jan 22 12:41:29.107: INFO: Pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124737974s
Jan 22 12:41:31.126: INFO: Pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143245783s
Jan 22 12:41:33.139: INFO: Pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156357211s
Jan 22 12:41:35.194: INFO: Pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211254905s
STEP: Saw pod success
Jan 22 12:41:35.194: INFO: Pod "downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:41:35.199: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:41:35.255: INFO: Waiting for pod downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005 to disappear
Jan 22 12:41:35.279: INFO: Pod downwardapi-volume-80514b17-3d14-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:41:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vb5gx" for this suite.
Jan 22 12:41:41.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:41:41.506: INFO: namespace: e2e-tests-downward-api-vb5gx, resource: bindings, ignored listing per whitelist
Jan 22 12:41:41.576: INFO: namespace e2e-tests-downward-api-vb5gx deletion completed in 6.221556706s

• [SLOW TEST:16.765 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:41:41.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:41:41.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-krvqp" to be "success or failure"
Jan 22 12:41:41.827: INFO: Pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.934675ms
Jan 22 12:41:44.460: INFO: Pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.654463201s
Jan 22 12:41:46.499: INFO: Pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694066007s
Jan 22 12:41:48.533: INFO: Pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.728051062s
Jan 22 12:41:50.811: INFO: Pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.005718094s
Jan 22 12:41:53.141: INFO: Pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.33599968s
STEP: Saw pod success
Jan 22 12:41:53.141: INFO: Pod "downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:41:53.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:41:53.361: INFO: Waiting for pod downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005 to disappear
Jan 22 12:41:53.373: INFO: Pod downwardapi-volume-8a5654f3-3d14-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:41:53.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-krvqp" for this suite.
Jan 22 12:42:01.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:42:01.694: INFO: namespace: e2e-tests-downward-api-krvqp, resource: bindings, ignored listing per whitelist
Jan 22 12:42:01.703: INFO: namespace e2e-tests-downward-api-krvqp deletion completed in 8.305799529s

• [SLOW TEST:20.127 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:42:01.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:42:01.953: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 22 12:42:01.959: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dspqz/daemonsets","resourceVersion":"19078619"},"items":null}

Jan 22 12:42:01.962: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dspqz/pods","resourceVersion":"19078619"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:42:01.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dspqz" for this suite.
Jan 22 12:42:08.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:42:08.182: INFO: namespace: e2e-tests-daemonsets-dspqz, resource: bindings, ignored listing per whitelist
Jan 22 12:42:08.190: INFO: namespace e2e-tests-daemonsets-dspqz deletion completed in 6.218659839s

S [SKIPPING] [6.487 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 22 12:42:01.953: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:42:08.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:42:08.321: INFO: Creating deployment "test-recreate-deployment"
Jan 22 12:42:08.418: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 22 12:42:08.433: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 22 12:42:10.463: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 22 12:42:10.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:42:12.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:42:14.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:42:16.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715293728, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 22 12:42:18.504: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 22 12:42:18.534: INFO: Updating deployment test-recreate-deployment
Jan 22 12:42:18.535: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 22 12:42:19.182: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-8gm6p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8gm6p/deployments/test-recreate-deployment,UID:9a2902f4-3d14-11ea-a994-fa163e34d433,ResourceVersion:19078689,Generation:2,CreationTimestamp:2020-01-22 12:42:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-22 12:42:18 +0000 UTC 2020-01-22 12:42:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-22 12:42:19 +0000 UTC 2020-01-22 12:42:08 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 22 12:42:19.197: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-8gm6p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8gm6p/replicasets/test-recreate-deployment-589c4bfd,UID:a076bcd8-3d14-11ea-a994-fa163e34d433,ResourceVersion:19078687,Generation:1,CreationTimestamp:2020-01-22 12:42:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9a2902f4-3d14-11ea-a994-fa163e34d433 0xc00299ccdf 0xc00299ccf0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 22 12:42:19.197: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 22 12:42:19.198: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-8gm6p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8gm6p/replicasets/test-recreate-deployment-5bf7f65dc,UID:9a3c6dbe-3d14-11ea-a994-fa163e34d433,ResourceVersion:19078677,Generation:2,CreationTimestamp:2020-01-22 12:42:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9a2902f4-3d14-11ea-a994-fa163e34d433 0xc00299cdb0 0xc00299cdb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 22 12:42:19.207: INFO: Pod "test-recreate-deployment-589c4bfd-m48hc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-m48hc,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-8gm6p,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8gm6p/pods/test-recreate-deployment-589c4bfd-m48hc,UID:a078ffa8-3d14-11ea-a994-fa163e34d433,ResourceVersion:19078690,Generation:0,CreationTimestamp:2020-01-22 12:42:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd a076bcd8-3d14-11ea-a994-fa163e34d433 0xc0029904ef 0xc002990580}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w89qq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w89qq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-w89qq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029905e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002990600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:42:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:42:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:42:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 12:42:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 12:42:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:42:19.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8gm6p" for this suite.
Jan 22 12:42:29.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:42:29.329: INFO: namespace: e2e-tests-deployment-8gm6p, resource: bindings, ignored listing per whitelist
Jan 22 12:42:29.402: INFO: namespace e2e-tests-deployment-8gm6p deletion completed in 10.186242054s

• [SLOW TEST:21.212 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:42:29.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:42:29.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-t7j7x" to be "success or failure"
Jan 22 12:42:29.706: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.845541ms
Jan 22 12:42:31.724: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024216193s
Jan 22 12:42:33.758: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057975564s
Jan 22 12:42:35.773: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072834529s
Jan 22 12:42:37.789: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089323786s
Jan 22 12:42:39.804: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103914163s
Jan 22 12:42:42.103: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.402745764s
STEP: Saw pod success
Jan 22 12:42:42.103: INFO: Pod "downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:42:42.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:42:42.325: INFO: Waiting for pod downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005 to disappear
Jan 22 12:42:42.340: INFO: Pod downwardapi-volume-a6e33611-3d14-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:42:42.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t7j7x" for this suite.
Jan 22 12:42:48.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:42:48.649: INFO: namespace: e2e-tests-projected-t7j7x, resource: bindings, ignored listing per whitelist
Jan 22 12:42:48.690: INFO: namespace e2e-tests-projected-t7j7x deletion completed in 6.333756372s

• [SLOW TEST:19.288 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:42:48.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 22 12:42:49.054: INFO: Waiting up to 5m0s for pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-kkgwd" to be "success or failure"
Jan 22 12:42:49.060: INFO: Pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159051ms
Jan 22 12:42:51.077: INFO: Pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022864368s
Jan 22 12:42:53.106: INFO: Pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052322033s
Jan 22 12:42:55.178: INFO: Pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124287637s
Jan 22 12:42:57.217: INFO: Pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16332851s
Jan 22 12:42:59.344: INFO: Pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.290066251s
STEP: Saw pod success
Jan 22 12:42:59.344: INFO: Pod "pod-b25b6f19-3d14-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:42:59.355: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b25b6f19-3d14-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:42:59.500: INFO: Waiting for pod pod-b25b6f19-3d14-11ea-ad91-0242ac110005 to disappear
Jan 22 12:42:59.518: INFO: Pod pod-b25b6f19-3d14-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:42:59.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kkgwd" for this suite.
Jan 22 12:43:05.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:43:05.602: INFO: namespace: e2e-tests-emptydir-kkgwd, resource: bindings, ignored listing per whitelist
Jan 22 12:43:05.699: INFO: namespace e2e-tests-emptydir-kkgwd deletion completed in 6.169858972s

• [SLOW TEST:17.009 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:43:05.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 22 12:43:24.158: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:24.177: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:26.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:26.191: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:28.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:28.195: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:30.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:30.192: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:32.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:32.192: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:34.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:34.195: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:36.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:36.204: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:38.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:38.189: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:40.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:40.241: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:42.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:42.197: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:44.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:44.190: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:46.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:46.207: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:48.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:48.192: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:50.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:50.190: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:52.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:52.189: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 22 12:43:54.177: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 22 12:43:54.189: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:43:54.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cc6jr" for this suite.
Jan 22 12:44:18.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:44:18.324: INFO: namespace: e2e-tests-container-lifecycle-hook-cc6jr, resource: bindings, ignored listing per whitelist
Jan 22 12:44:18.411: INFO: namespace e2e-tests-container-lifecycle-hook-cc6jr deletion completed in 24.182871867s

• [SLOW TEST:72.712 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:44:18.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e7d5b87e-3d14-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 12:44:18.675: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-tgsc5" to be "success or failure"
Jan 22 12:44:18.681: INFO: Pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190154ms
Jan 22 12:44:20.698: INFO: Pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023318725s
Jan 22 12:44:22.711: INFO: Pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035898924s
Jan 22 12:44:24.724: INFO: Pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049047149s
Jan 22 12:44:26.883: INFO: Pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207746029s
Jan 22 12:44:28.923: INFO: Pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.248178544s
STEP: Saw pod success
Jan 22 12:44:28.923: INFO: Pod "pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:44:28.932: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 22 12:44:29.066: INFO: Waiting for pod pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005 to disappear
Jan 22 12:44:29.077: INFO: Pod pod-projected-configmaps-e7d719ef-3d14-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:44:29.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tgsc5" for this suite.
Jan 22 12:44:35.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:44:35.233: INFO: namespace: e2e-tests-projected-tgsc5, resource: bindings, ignored listing per whitelist
Jan 22 12:44:35.276: INFO: namespace e2e-tests-projected-tgsc5 deletion completed in 6.191105668s

• [SLOW TEST:16.865 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:44:35.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 22 12:44:35.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c26lb'
Jan 22 12:44:35.896: INFO: stderr: ""
Jan 22 12:44:35.897: INFO: stdout: "pod/pause created\n"
Jan 22 12:44:35.897: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 22 12:44:35.897: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-c26lb" to be "running and ready"
Jan 22 12:44:36.013: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 116.02503ms
Jan 22 12:44:38.026: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129080822s
Jan 22 12:44:40.040: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143223812s
Jan 22 12:44:43.024: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.127566757s
Jan 22 12:44:45.042: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.14541364s
Jan 22 12:44:47.065: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 11.167839577s
Jan 22 12:44:47.065: INFO: Pod "pause" satisfied condition "running and ready"
Jan 22 12:44:47.065: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 22 12:44:47.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-c26lb'
Jan 22 12:44:47.326: INFO: stderr: ""
Jan 22 12:44:47.326: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 22 12:44:47.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-c26lb'
Jan 22 12:44:47.479: INFO: stderr: ""
Jan 22 12:44:47.479: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 22 12:44:47.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-c26lb'
Jan 22 12:44:47.678: INFO: stderr: ""
Jan 22 12:44:47.678: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 22 12:44:47.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-c26lb'
Jan 22 12:44:47.808: INFO: stderr: ""
Jan 22 12:44:47.808: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 22 12:44:47.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-c26lb'
Jan 22 12:44:47.966: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 22 12:44:47.967: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 22 12:44:47.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-c26lb'
Jan 22 12:44:48.216: INFO: stderr: "No resources found.\n"
Jan 22 12:44:48.216: INFO: stdout: ""
Jan 22 12:44:48.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-c26lb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 22 12:44:48.339: INFO: stderr: ""
Jan 22 12:44:48.339: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:44:48.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c26lb" for this suite.
Jan 22 12:44:54.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:44:54.461: INFO: namespace: e2e-tests-kubectl-c26lb, resource: bindings, ignored listing per whitelist
Jan 22 12:44:54.649: INFO: namespace e2e-tests-kubectl-c26lb deletion completed in 6.296854563s

• [SLOW TEST:19.372 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:44:54.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zd6pp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zd6pp to expose endpoints map[]
Jan 22 12:44:54.965: INFO: Get endpoints failed (62.08867ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 22 12:44:55.970: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zd6pp exposes endpoints map[] (1.066953629s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zd6pp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zd6pp to expose endpoints map[pod1:[100]]
Jan 22 12:45:01.626: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.481623724s elapsed, will retry)
Jan 22 12:45:04.722: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zd6pp exposes endpoints map[pod1:[100]] (8.578211635s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zd6pp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zd6pp to expose endpoints map[pod1:[100] pod2:[101]]
Jan 22 12:45:09.669: INFO: Unexpected endpoints: found map[fe16c76b-3d14-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.929593044s elapsed, will retry)
Jan 22 12:45:12.992: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zd6pp exposes endpoints map[pod1:[100] pod2:[101]] (8.252632749s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zd6pp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zd6pp to expose endpoints map[pod2:[101]]
Jan 22 12:45:14.168: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zd6pp exposes endpoints map[pod2:[101]] (1.162301802s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zd6pp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zd6pp to expose endpoints map[]
Jan 22 12:45:15.358: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zd6pp exposes endpoints map[] (1.175844315s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:45:16.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zd6pp" for this suite.
Jan 22 12:45:41.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:45:41.408: INFO: namespace: e2e-tests-services-zd6pp, resource: bindings, ignored listing per whitelist
Jan 22 12:45:41.416: INFO: namespace e2e-tests-services-zd6pp deletion completed in 24.341069856s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.765 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:45:41.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-1940b86d-3d15-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:45:41.569: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-44rrk" to be "success or failure"
Jan 22 12:45:41.622: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.040132ms
Jan 22 12:45:43.642: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073403671s
Jan 22 12:45:45.971: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402004097s
Jan 22 12:45:48.170: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600830401s
Jan 22 12:45:50.202: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633038281s
Jan 22 12:45:52.217: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.647928683s
Jan 22 12:45:54.239: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.670320749s
STEP: Saw pod success
Jan 22 12:45:54.239: INFO: Pod "pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:45:54.250: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 22 12:45:55.251: INFO: Waiting for pod pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:45:55.304: INFO: Pod pod-projected-secrets-19417791-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:45:55.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-44rrk" for this suite.
Jan 22 12:46:01.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:46:02.139: INFO: namespace: e2e-tests-projected-44rrk, resource: bindings, ignored listing per whitelist
Jan 22 12:46:02.784: INFO: namespace e2e-tests-projected-44rrk deletion completed in 7.318338533s

• [SLOW TEST:21.368 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
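The repeated `Phase="Pending" … Elapsed` lines above come from a poll loop that rechecks the pod phase (roughly every 2s here) until it reaches `Succeeded` or `Failed`, up to the 5m0s deadline. A minimal self-contained sketch of that pattern — the `get_phase` stub and helper names are illustrative, not the e2e framework's actual code:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the deadline passes."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Stub: report Pending for the first five polls, then Succeeded,
# mimicking the six status lines in the log above.
phases = iter(["Pending"] * 5 + ["Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), sleep=lambda _: None)
print(result)  # Succeeded
```

The real framework additionally treats `Failed` as satisfying the "success or failure" condition and then asserts on the container's exit status, which is why the log says *satisfied condition "success or failure"* rather than simply "succeeded".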
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:46:02.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 22 12:46:02.925: INFO: Waiting up to 5m0s for pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-wbm74" to be "success or failure"
Jan 22 12:46:02.943: INFO: Pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.26526ms
Jan 22 12:46:04.963: INFO: Pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037536896s
Jan 22 12:46:07.074: INFO: Pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148051786s
Jan 22 12:46:09.106: INFO: Pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180799444s
Jan 22 12:46:11.146: INFO: Pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220593775s
Jan 22 12:46:13.205: INFO: Pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.279916748s
STEP: Saw pod success
Jan 22 12:46:13.206: INFO: Pod "pod-25fd7096-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:46:13.234: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-25fd7096-3d15-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:46:13.555: INFO: Waiting for pod pod-25fd7096-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:46:13.561: INFO: Pod pod-25fd7096-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:46:13.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wbm74" for this suite.
Jan 22 12:46:19.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:46:19.674: INFO: namespace: e2e-tests-emptydir-wbm74, resource: bindings, ignored listing per whitelist
Jan 22 12:46:19.788: INFO: namespace e2e-tests-emptydir-wbm74 deletion completed in 6.223373782s

• [SLOW TEST:17.004 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
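The `(non-root,0777,tmpfs)` case above mounts an emptyDir with medium `Memory`, writes a file inside it as a non-root user with mode 0777, and verifies the mode from the container. The permission part of that check can be sketched locally, with a plain temporary directory standing in for the tmpfs mount (an illustration of the assertion, not the mount-tester image's code):

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as mount:  # stand-in for the emptyDir mount
    path = os.path.join(mount, "test-file")
    with open(path, "w") as f:
        f.write("mount-tester content\n")
    # Explicit chmod, so the process umask does not affect the result.
    os.chmod(path, 0o777)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o777
```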
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:46:19.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-3027e010-3d15-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:46:19.990: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-v28fb" to be "success or failure"
Jan 22 12:46:20.014: INFO: Pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.051244ms
Jan 22 12:46:22.280: INFO: Pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289732748s
Jan 22 12:46:24.300: INFO: Pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310028473s
Jan 22 12:46:26.495: INFO: Pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.505330558s
Jan 22 12:46:28.521: INFO: Pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530995477s
Jan 22 12:46:30.569: INFO: Pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.579022223s
STEP: Saw pod success
Jan 22 12:46:30.569: INFO: Pod "pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:46:30.582: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 22 12:46:30.799: INFO: Waiting for pod pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:46:30.814: INFO: Pod pod-projected-secrets-30287821-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:46:30.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v28fb" for this suite.
Jan 22 12:46:36.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:46:37.055: INFO: namespace: e2e-tests-projected-v28fb, resource: bindings, ignored listing per whitelist
Jan 22 12:46:37.165: INFO: namespace e2e-tests-projected-v28fb deletion completed in 6.339369334s

• [SLOW TEST:17.376 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:46:37.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3a925eab-3d15-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 22 12:46:37.528: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-configmap-t64sp" to be "success or failure"
Jan 22 12:46:37.772: INFO: Pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 243.183547ms
Jan 22 12:46:39.795: INFO: Pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266002966s
Jan 22 12:46:41.813: INFO: Pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284054548s
Jan 22 12:46:44.032: INFO: Pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503629429s
Jan 22 12:46:46.094: INFO: Pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56525663s
Jan 22 12:46:48.448: INFO: Pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.918973692s
STEP: Saw pod success
Jan 22 12:46:48.448: INFO: Pod "pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:46:48.460: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 22 12:46:48.941: INFO: Waiting for pod pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:46:48.970: INFO: Pod pod-configmaps-3a93868d-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:46:48.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t64sp" for this suite.
Jan 22 12:46:57.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:46:57.269: INFO: namespace: e2e-tests-configmap-t64sp, resource: bindings, ignored listing per whitelist
Jan 22 12:46:57.294: INFO: namespace e2e-tests-configmap-t64sp deletion completed in 8.312392672s

• [SLOW TEST:20.129 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:46:57.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 22 12:46:57.437: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079338,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 12:46:57.437: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079338,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 22 12:47:07.465: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079351,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 22 12:47:07.465: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079351,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 22 12:47:17.497: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079364,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 22 12:47:17.497: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079364,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 22 12:47:27.522: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079376,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 22 12:47:27.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-a,UID:467b5984-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079376,Generation:0,CreationTimestamp:2020-01-22 12:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 22 12:47:37.541: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-b,UID:5e616a8a-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079389,Generation:0,CreationTimestamp:2020-01-22 12:47:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 12:47:37.542: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-b,UID:5e616a8a-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079389,Generation:0,CreationTimestamp:2020-01-22 12:47:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 22 12:47:47.564: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-b,UID:5e616a8a-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079402,Generation:0,CreationTimestamp:2020-01-22 12:47:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 22 12:47:47.564: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7v2s2,SelfLink:/api/v1/namespaces/e2e-tests-watch-7v2s2/configmaps/e2e-watch-test-configmap-b,UID:5e616a8a-3d15-11ea-a994-fa163e34d433,ResourceVersion:19079402,Generation:0,CreationTimestamp:2020-01-22 12:47:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:47:57.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7v2s2" for this suite.
Jan 22 12:48:05.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:48:05.708: INFO: namespace: e2e-tests-watch-7v2s2, resource: bindings, ignored listing per whitelist
Jan 22 12:48:05.844: INFO: namespace e2e-tests-watch-7v2s2 deletion completed in 8.269376923s

• [SLOW TEST:68.550 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
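The three watches above (label A, label B, and A-or-B) each receive only the events whose labels match their selector, which is why every mutation of `e2e-watch-test-configmap-a` is logged twice: once from the A watch and once from the A-or-B watch, but never from the B watch. A self-contained sketch of that label-selector dispatch — hypothetical names, not client-go's actual watch machinery:

```python
def dispatch(event, watchers):
    """Deliver an event to every watcher whose label selector matches."""
    label = event["labels"].get("watch-this-configmap")
    return [name for name, accepted in watchers.items() if label in accepted]

# Selectors mirroring the three watches created in the test.
watchers = {
    "watch-A":  {"multiple-watchers-A"},
    "watch-B":  {"multiple-watchers-B"},
    "watch-AB": {"multiple-watchers-A", "multiple-watchers-B"},
}

event = {"type": "ADDED",
         "labels": {"watch-this-configmap": "multiple-watchers-A"}}
print(sorted(dispatch(event, watchers)))  # ['watch-A', 'watch-AB']
```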
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:48:05.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 22 12:48:06.074: INFO: Waiting up to 5m0s for pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-containers-4mhnc" to be "success or failure"
Jan 22 12:48:06.116: INFO: Pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.223446ms
Jan 22 12:48:08.154: INFO: Pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080593866s
Jan 22 12:48:10.177: INFO: Pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102884262s
Jan 22 12:48:12.186: INFO: Pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112681594s
Jan 22 12:48:14.277: INFO: Pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202740987s
Jan 22 12:48:16.412: INFO: Pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.338151321s
STEP: Saw pod success
Jan 22 12:48:16.412: INFO: Pod "client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:48:16.427: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:48:16.852: INFO: Waiting for pod client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:48:16.876: INFO: Pod client-containers-6f63c65b-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:48:16.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4mhnc" for this suite.
Jan 22 12:48:25.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:48:25.118: INFO: namespace: e2e-tests-containers-4mhnc, resource: bindings, ignored listing per whitelist
Jan 22 12:48:25.203: INFO: namespace e2e-tests-containers-4mhnc deletion completed in 8.313887712s

• [SLOW TEST:19.359 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:48:25.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 22 12:48:25.590: INFO: Waiting up to 5m0s for pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-dqxk8" to be "success or failure"
Jan 22 12:48:25.670: INFO: Pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 80.034038ms
Jan 22 12:48:27.683: INFO: Pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09275911s
Jan 22 12:48:29.694: INFO: Pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10452438s
Jan 22 12:48:31.758: INFO: Pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167825367s
Jan 22 12:48:33.781: INFO: Pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191250964s
Jan 22 12:48:36.093: INFO: Pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.503331527s
STEP: Saw pod success
Jan 22 12:48:36.093: INFO: Pod "downward-api-7aff820a-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:48:36.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7aff820a-3d15-11ea-ad91-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 22 12:48:36.392: INFO: Waiting for pod downward-api-7aff820a-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:48:36.399: INFO: Pod downward-api-7aff820a-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:48:36.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dqxk8" for this suite.
Jan 22 12:48:42.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:48:42.652: INFO: namespace: e2e-tests-downward-api-dqxk8, resource: bindings, ignored listing per whitelist
Jan 22 12:48:42.669: INFO: namespace e2e-tests-downward-api-dqxk8 deletion completed in 6.260830582s

• [SLOW TEST:17.465 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
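The downward-api pod above exposes `limits.cpu/memory` and `requests.cpu/memory` via `env[].valueFrom.resourceFieldRef`, where the resource quantity is divided by the field's `divisor` (rounded up, to my understanding) before being injected as the env var's value. A rough sketch of that divisor arithmetic, with quantities pre-parsed to millis — simplified, not the kubelet's implementation:

```python
def resource_field_value(quantity_millis, divisor_millis):
    """Divide a resource quantity by the divisor, rounding up, as (assumed
    here) the downward API does when rendering resourceFieldRef env vars."""
    return -(-quantity_millis // divisor_millis)  # ceiling division

# cpu limit of 250m: divisor "1" (one core) yields 1; divisor "1m" yields 250.
print(resource_field_value(250, 1000))  # 1
print(resource_field_value(250, 1))     # 250
```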
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:48:42.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-s7g9
STEP: Creating a pod to test atomic-volume-subpath
Jan 22 12:48:42.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s7g9" in namespace "e2e-tests-subpath-s9gft" to be "success or failure"
Jan 22 12:48:42.878: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 73.9743ms
Jan 22 12:48:44.900: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096021049s
Jan 22 12:48:47.000: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195534145s
Jan 22 12:48:49.528: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723990592s
Jan 22 12:48:51.568: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.764276708s
Jan 22 12:48:53.581: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.777101187s
Jan 22 12:48:55.601: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.797264213s
Jan 22 12:48:57.617: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.81251697s
Jan 22 12:48:59.648: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.843984701s
Jan 22 12:49:01.670: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 18.86621362s
Jan 22 12:49:03.687: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 20.882984116s
Jan 22 12:49:05.703: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 22.898997662s
Jan 22 12:49:07.719: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 24.914668249s
Jan 22 12:49:09.737: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 26.93326651s
Jan 22 12:49:11.756: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 28.951665461s
Jan 22 12:49:13.782: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 30.97745026s
Jan 22 12:49:15.802: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 32.997477543s
Jan 22 12:49:17.837: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Running", Reason="", readiness=false. Elapsed: 35.032484646s
Jan 22 12:49:20.086: INFO: Pod "pod-subpath-test-configmap-s7g9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.281547844s
STEP: Saw pod success
Jan 22 12:49:20.086: INFO: Pod "pod-subpath-test-configmap-s7g9" satisfied condition "success or failure"
Jan 22 12:49:20.102: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-s7g9 container test-container-subpath-configmap-s7g9: 
STEP: delete the pod
Jan 22 12:49:20.563: INFO: Waiting for pod pod-subpath-test-configmap-s7g9 to disappear
Jan 22 12:49:20.690: INFO: Pod pod-subpath-test-configmap-s7g9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-s7g9
Jan 22 12:49:20.690: INFO: Deleting pod "pod-subpath-test-configmap-s7g9" in namespace "e2e-tests-subpath-s9gft"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:49:20.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-s9gft" for this suite.
Jan 22 12:49:26.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:49:26.866: INFO: namespace: e2e-tests-subpath-s9gft, resource: bindings, ignored listing per whitelist
Jan 22 12:49:27.006: INFO: namespace e2e-tests-subpath-s9gft deletion completed in 6.29374333s

• [SLOW TEST:44.337 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:49:27.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:49:51.538: INFO: Container started at 2020-01-22 12:49:35 +0000 UTC, pod became ready at 2020-01-22 12:49:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:49:51.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-whppf" for this suite.
Jan 22 12:50:13.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:50:13.659: INFO: namespace: e2e-tests-container-probe-whppf, resource: bindings, ignored listing per whitelist
Jan 22 12:50:13.742: INFO: namespace e2e-tests-container-probe-whppf deletion completed in 22.194230809s

• [SLOW TEST:46.735 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:50:13.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 22 12:50:14.096: INFO: Waiting up to 5m0s for pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-hmfwb" to be "success or failure"
Jan 22 12:50:14.145: INFO: Pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.046262ms
Jan 22 12:50:16.161: INFO: Pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064970761s
Jan 22 12:50:18.180: INFO: Pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084075827s
Jan 22 12:50:20.232: INFO: Pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136003742s
Jan 22 12:50:22.254: INFO: Pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157745149s
Jan 22 12:50:24.278: INFO: Pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181889899s
STEP: Saw pod success
Jan 22 12:50:24.278: INFO: Pod "downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:50:24.286: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 22 12:50:24.480: INFO: Waiting for pod downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:50:24.506: INFO: Pod downward-api-bbab58d9-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:50:24.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hmfwb" for this suite.
Jan 22 12:50:30.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:50:30.734: INFO: namespace: e2e-tests-downward-api-hmfwb, resource: bindings, ignored listing per whitelist
Jan 22 12:50:30.764: INFO: namespace e2e-tests-downward-api-hmfwb deletion completed in 6.238824326s

• [SLOW TEST:17.021 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:50:30.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 22 12:50:31.076: INFO: Waiting up to 5m0s for pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005" in namespace "e2e-tests-emptydir-wzw99" to be "success or failure"
Jan 22 12:50:31.094: INFO: Pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.549594ms
Jan 22 12:50:33.483: INFO: Pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406854192s
Jan 22 12:50:35.510: INFO: Pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433764112s
Jan 22 12:50:37.738: INFO: Pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.662117204s
Jan 22 12:50:39.754: INFO: Pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.678566166s
Jan 22 12:50:41.842: INFO: Pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.766463956s
STEP: Saw pod success
Jan 22 12:50:41.843: INFO: Pod "pod-c5cfb42f-3d15-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:50:41.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c5cfb42f-3d15-11ea-ad91-0242ac110005 container test-container: 
STEP: delete the pod
Jan 22 12:50:42.163: INFO: Waiting for pod pod-c5cfb42f-3d15-11ea-ad91-0242ac110005 to disappear
Jan 22 12:50:42.177: INFO: Pod pod-c5cfb42f-3d15-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:50:42.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wzw99" for this suite.
Jan 22 12:50:48.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:50:48.292: INFO: namespace: e2e-tests-emptydir-wzw99, resource: bindings, ignored listing per whitelist
Jan 22 12:50:48.432: INFO: namespace e2e-tests-emptydir-wzw99 deletion completed in 6.240459789s

• [SLOW TEST:17.668 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:50:48.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 22 12:50:48.911: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:51:10.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-t67hc" for this suite.
Jan 22 12:51:34.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:51:34.333: INFO: namespace: e2e-tests-init-container-t67hc, resource: bindings, ignored listing per whitelist
Jan 22 12:51:34.385: INFO: namespace e2e-tests-init-container-t67hc deletion completed in 24.270016138s

• [SLOW TEST:45.953 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:51:34.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-7sxrn
Jan 22 12:51:44.732: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-7sxrn
STEP: checking the pod's current state and verifying that restartCount is present
Jan 22 12:51:44.739: INFO: Initial restart count of pod liveness-exec is 0
Jan 22 12:52:37.214: INFO: Restart count of pod e2e-tests-container-probe-7sxrn/liveness-exec is now 1 (52.474869613s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:52:37.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-7sxrn" for this suite.
Jan 22 12:52:43.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:52:43.569: INFO: namespace: e2e-tests-container-probe-7sxrn, resource: bindings, ignored listing per whitelist
Jan 22 12:52:43.613: INFO: namespace e2e-tests-container-probe-7sxrn deletion completed in 6.250280055s

• [SLOW TEST:69.228 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:52:43.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 22 12:52:43.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 22 12:52:44.075: INFO: stderr: ""
Jan 22 12:52:44.075: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:52:44.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4sq82" for this suite.
Jan 22 12:52:50.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:52:50.188: INFO: namespace: e2e-tests-kubectl-4sq82, resource: bindings, ignored listing per whitelist
Jan 22 12:52:50.326: INFO: namespace e2e-tests-kubectl-4sq82 deletion completed in 6.230961376s

• [SLOW TEST:6.713 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:52:50.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-7cxj
STEP: Creating a pod to test atomic-volume-subpath
Jan 22 12:52:50.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7cxj" in namespace "e2e-tests-subpath-sf6sj" to be "success or failure"
Jan 22 12:52:50.721: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.334965ms
Jan 22 12:52:52.752: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043393851s
Jan 22 12:52:54.772: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063251621s
Jan 22 12:52:56.787: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077704861s
Jan 22 12:52:58.820: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110724654s
Jan 22 12:53:00.833: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.124022068s
Jan 22 12:53:03.266: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.556967581s
Jan 22 12:53:05.756: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Pending", Reason="", readiness=false. Elapsed: 15.047411034s
Jan 22 12:53:07.777: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 17.067944864s
Jan 22 12:53:09.797: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 19.087640839s
Jan 22 12:53:11.842: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 21.132635266s
Jan 22 12:53:13.889: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 23.179634801s
Jan 22 12:53:15.904: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 25.195519385s
Jan 22 12:53:17.926: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 27.216823282s
Jan 22 12:53:20.006: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 29.297454485s
Jan 22 12:53:22.035: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 31.325905548s
Jan 22 12:53:24.056: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Running", Reason="", readiness=false. Elapsed: 33.347535067s
Jan 22 12:53:26.083: INFO: Pod "pod-subpath-test-downwardapi-7cxj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.374304425s
STEP: Saw pod success
Jan 22 12:53:26.083: INFO: Pod "pod-subpath-test-downwardapi-7cxj" satisfied condition "success or failure"
Jan 22 12:53:26.092: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-7cxj container test-container-subpath-downwardapi-7cxj: 
STEP: delete the pod
Jan 22 12:53:26.468: INFO: Waiting for pod pod-subpath-test-downwardapi-7cxj to disappear
Jan 22 12:53:26.618: INFO: Pod pod-subpath-test-downwardapi-7cxj no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-7cxj
Jan 22 12:53:26.618: INFO: Deleting pod "pod-subpath-test-downwardapi-7cxj" in namespace "e2e-tests-subpath-sf6sj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:53:26.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sf6sj" for this suite.
Jan 22 12:53:32.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:53:32.921: INFO: namespace: e2e-tests-subpath-sf6sj, resource: bindings, ignored listing per whitelist
Jan 22 12:53:32.982: INFO: namespace e2e-tests-subpath-sf6sj deletion completed in 6.31076805s

• [SLOW TEST:42.655 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:53:32.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-325a63bc-3d16-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 12:53:33.180: INFO: Waiting up to 5m0s for pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-5zsqj" to be "success or failure"
Jan 22 12:53:33.188: INFO: Pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.922512ms
Jan 22 12:53:35.234: INFO: Pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053878661s
Jan 22 12:53:37.252: INFO: Pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071460012s
Jan 22 12:53:39.301: INFO: Pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120333657s
Jan 22 12:53:41.598: INFO: Pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417178466s
Jan 22 12:53:43.626: INFO: Pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.445022463s
STEP: Saw pod success
Jan 22 12:53:43.626: INFO: Pod "pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:53:43.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 22 12:53:44.314: INFO: Waiting for pod pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005 to disappear
Jan 22 12:53:44.330: INFO: Pod pod-secrets-325b286a-3d16-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:53:44.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5zsqj" for this suite.
Jan 22 12:53:50.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:53:50.720: INFO: namespace: e2e-tests-secrets-5zsqj, resource: bindings, ignored listing per whitelist
Jan 22 12:53:50.755: INFO: namespace e2e-tests-secrets-5zsqj deletion completed in 6.417211165s

• [SLOW TEST:17.773 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:53:50.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 22 12:53:50.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-dm5mp'
Jan 22 12:53:52.671: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 22 12:53:52.671: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 22 12:53:52.756: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 22 12:53:52.823: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 22 12:53:52.964: INFO: scanned /root for discovery docs: 
Jan 22 12:53:52.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-dm5mp'
Jan 22 12:54:20.149: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 22 12:54:20.149: INFO: stdout: "Created e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920\nScaling up e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 22 12:54:20.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dm5mp'
Jan 22 12:54:20.287: INFO: stderr: ""
Jan 22 12:54:20.287: INFO: stdout: "e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920-kst2l e2e-test-nginx-rc-l6ps5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 22 12:54:25.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dm5mp'
Jan 22 12:54:25.490: INFO: stderr: ""
Jan 22 12:54:25.490: INFO: stdout: "e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920-kst2l "
Jan 22 12:54:25.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920-kst2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dm5mp'
Jan 22 12:54:25.627: INFO: stderr: ""
Jan 22 12:54:25.627: INFO: stdout: "true"
Jan 22 12:54:25.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920-kst2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dm5mp'
Jan 22 12:54:25.751: INFO: stderr: ""
Jan 22 12:54:25.751: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 22 12:54:25.751: INFO: e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920-kst2l is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 22 12:54:25.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dm5mp'
Jan 22 12:54:25.926: INFO: stderr: ""
Jan 22 12:54:25.927: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:54:25.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dm5mp" for this suite.
Jan 22 12:54:59.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:55:00.080: INFO: namespace: e2e-tests-kubectl-dm5mp, resource: bindings, ignored listing per whitelist
Jan 22 12:55:00.208: INFO: namespace e2e-tests-kubectl-dm5mp deletion completed in 34.274963895s

• [SLOW TEST:69.452 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
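Annotation: the pod-name listing in the test above comes from a Go template (`{{range.items}}{{.metadata.name}} {{end}}`) applied to the pod list. A minimal sketch of the same extraction in Python, using the two pod names the log actually printed (the JSON structure mirrors `kubectl get pods -o json`):

```python
import json

# Pod list shaped like `kubectl get pods -o json` output; the names are the
# ones printed in the log above.
pod_list = json.loads("""
{
  "items": [
    {"metadata": {"name": "e2e-test-nginx-rc-70396bd249fbeb5a7a63a93692e25920-kst2l"}},
    {"metadata": {"name": "e2e-test-nginx-rc-l6ps5"}}
  ]
}
""")

# Equivalent of the Go template {{range .items}}{{.metadata.name}} {{end}}:
# each pod name followed by a single space.
names = "".join(p["metadata"]["name"] + " " for p in pod_list["items"])
print(names)
```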
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:55:00.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:55:15.010: INFO: Waiting up to 5m0s for pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005" in namespace "e2e-tests-pods-pclkh" to be "success or failure"
Jan 22 12:55:15.244: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 234.133929ms
Jan 22 12:55:19.548: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538024374s
Jan 22 12:55:21.560: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550181819s
Jan 22 12:55:23.606: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595937976s
Jan 22 12:55:26.194: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.184127546s
Jan 22 12:55:28.223: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.213323256s
Jan 22 12:55:31.177: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.167400903s
Jan 22 12:55:33.202: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.192058166s
Jan 22 12:55:35.219: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.209432606s
Jan 22 12:55:37.248: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.238097393s
STEP: Saw pod success
Jan 22 12:55:37.248: INFO: Pod "client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:55:37.257: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005 container env3cont: 
STEP: delete the pod
Jan 22 12:55:38.119: INFO: Waiting for pod client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005 to disappear
Jan 22 12:55:38.423: INFO: Pod client-envvars-6f0ba5b4-3d16-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:55:38.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pclkh" for this suite.
Jan 22 12:56:20.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:56:20.678: INFO: namespace: e2e-tests-pods-pclkh, resource: bindings, ignored listing per whitelist
Jan 22 12:56:20.707: INFO: namespace e2e-tests-pods-pclkh deletion completed in 42.27559387s

• [SLOW TEST:80.498 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
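Annotation: the `Elapsed:` fields above are Go-style duration strings (`234.133929ms`, `22.238097393s`, `5m0s`). A simplified parser for the units seen in this log (not full Go duration syntax — nanosecond and microsecond units are omitted):

```python
import re

# Seconds per unit, for the ms/s/m/h units that appear in this log.
_UNITS = {"ms": 0.001, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text):
    """Sum the (value, unit) pairs of a Go-style duration into seconds.
    'ms' must precede 's' and 'm' in the alternation so it matches first."""
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m|h)", text):
        total += float(value) * _UNITS[unit]
    return total

print(parse_go_duration("5m0s"))            # the pod-wait timeout above
print(parse_go_duration("234.133929ms"))    # the first Elapsed value above
```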
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:56:20.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 12:56:20.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-pljqw" to be "success or failure"
Jan 22 12:56:21.011: INFO: Pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 149.070949ms
Jan 22 12:56:23.063: INFO: Pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201313853s
Jan 22 12:56:25.128: INFO: Pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266230893s
Jan 22 12:56:27.670: INFO: Pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.808233998s
Jan 22 12:56:29.686: INFO: Pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824011671s
Jan 22 12:56:31.706: INFO: Pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.844073559s
STEP: Saw pod success
Jan 22 12:56:31.706: INFO: Pod "downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:56:31.717: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 12:56:32.093: INFO: Waiting for pod downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005 to disappear
Jan 22 12:56:32.981: INFO: Pod downwardapi-volume-964e2d1b-3d16-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:56:32.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pljqw" for this suite.
Jan 22 12:56:39.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:56:39.852: INFO: namespace: e2e-tests-projected-pljqw, resource: bindings, ignored listing per whitelist
Jan 22 12:56:39.885: INFO: namespace e2e-tests-projected-pljqw deletion completed in 6.889748708s

• [SLOW TEST:19.178 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:56:39.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:56:40.160: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:56:41.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-k7t4x" for this suite.
Jan 22 12:56:47.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:56:47.449: INFO: namespace: e2e-tests-custom-resource-definition-k7t4x, resource: bindings, ignored listing per whitelist
Jan 22 12:56:47.461: INFO: namespace e2e-tests-custom-resource-definition-k7t4x deletion completed in 6.186296998s

• [SLOW TEST:7.576 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:56:47.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 22 12:57:10.003: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 22 12:57:10.026: INFO: Pod pod-with-poststart-http-hook still exists
Jan 22 12:57:12.026: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 22 12:57:12.041: INFO: Pod pod-with-poststart-http-hook still exists
Jan 22 12:57:14.026: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 22 12:57:14.180: INFO: Pod pod-with-poststart-http-hook still exists
Jan 22 12:57:16.026: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 22 12:57:16.047: INFO: Pod pod-with-poststart-http-hook still exists
Jan 22 12:57:18.026: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 22 12:57:18.044: INFO: Pod pod-with-poststart-http-hook still exists
Jan 22 12:57:20.026: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 22 12:57:20.048: INFO: Pod pod-with-poststart-http-hook still exists
Jan 22 12:57:22.026: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 22 12:57:22.047: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:57:22.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lkq2x" for this suite.
Jan 22 12:57:46.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:57:46.428: INFO: namespace: e2e-tests-container-lifecycle-hook-lkq2x, resource: bindings, ignored listing per whitelist
Jan 22 12:57:46.473: INFO: namespace e2e-tests-container-lifecycle-hook-lkq2x deletion completed in 24.418100844s

• [SLOW TEST:59.011 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
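Annotation: the repeated "Waiting for pod ... to disappear" lines above are a fixed-interval poll (every 2s) until the pod is gone or a timeout elapses. A sketch of that loop with an injectable clock so it runs without a cluster; `check_exists` would wrap a GET against the API server:

```python
import time

def wait_for_disappear(check_exists, timeout=60.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check_exists() until it returns False or the timeout elapses.
    Returns True if the object disappeared, False on timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        if not check_exists():
            return True
        sleep(interval)
    return False

# Simulated pod that "disappears" after three existence checks.
state = {"checks": 0}
def fake_pod_exists():
    state["checks"] += 1
    return state["checks"] < 4

print(wait_for_disappear(fake_pod_exists, timeout=5.0, interval=0.0))
```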
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:57:46.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 12:57:46.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 22 12:57:47.136: INFO: stderr: ""
Jan 22 12:57:47.136: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 22 12:57:47.144: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:57:47.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tgrr7" for this suite.
Jan 22 12:57:53.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:57:53.241: INFO: namespace: e2e-tests-kubectl-tgrr7, resource: bindings, ignored listing per whitelist
Jan 22 12:57:53.337: INFO: namespace e2e-tests-kubectl-tgrr7 deletion completed in 6.181854546s

S [SKIPPING] [6.864 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 22 12:57:47.144: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
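Annotation: the SKIPPING above is a version gate — the client is v1.13.12 but the apiserver reports v1.13.8, below the required "1.13.12". A sketch of the comparison (the real framework uses its own version utilities; this assumes plain dotted numeric versions):

```python
def parse_version(v):
    """Turn a version string like "v1.13.8" into a comparable tuple (1, 13, 8)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def server_supports(server_version, minimum):
    return parse_version(server_version) >= parse_version(minimum)

# The apiserver in this run is v1.13.8, so the "1.13.12" requirement fails.
print(server_supports("v1.13.8", "1.13.12"))
```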
SSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:57:53.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-cd9adee2-3d16-11ea-ad91-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-cd9adea3-3d16-11ea-ad91-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 22 12:57:53.741: INFO: Waiting up to 5m0s for pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005" in namespace "e2e-tests-projected-6fgcr" to be "success or failure"
Jan 22 12:57:53.759: INFO: Pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.929545ms
Jan 22 12:57:55.782: INFO: Pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041242723s
Jan 22 12:57:57.808: INFO: Pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06773052s
Jan 22 12:57:59.834: INFO: Pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093542314s
Jan 22 12:58:02.070: INFO: Pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.329546378s
Jan 22 12:58:04.084: INFO: Pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.34329336s
STEP: Saw pod success
Jan 22 12:58:04.084: INFO: Pod "projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 12:58:04.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 22 12:58:04.178: INFO: Waiting for pod projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005 to disappear
Jan 22 12:58:04.185: INFO: Pod projected-volume-cd9ade11-3d16-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 12:58:04.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6fgcr" for this suite.
Jan 22 12:58:10.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 12:58:10.386: INFO: namespace: e2e-tests-projected-6fgcr, resource: bindings, ignored listing per whitelist
Jan 22 12:58:10.423: INFO: namespace e2e-tests-projected-6fgcr deletion completed in 6.225892252s

• [SLOW TEST:17.085 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 12:58:10.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 22 13:01:16.209: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:16.328: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:18.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:18.348: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:20.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:20.341: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:22.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:22.343: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:24.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:24.346: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:26.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:26.352: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:28.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:28.342: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:30.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:30.347: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:32.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:32.340: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:34.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:34.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:36.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:36.517: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:38.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:38.345: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:40.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:40.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:42.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:42.342: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 22 13:01:44.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 22 13:01:44.348: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:01:44.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hqqv6" for this suite.
Jan 22 13:02:08.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:02:08.528: INFO: namespace: e2e-tests-container-lifecycle-hook-hqqv6, resource: bindings, ignored listing per whitelist
Jan 22 13:02:08.629: INFO: namespace e2e-tests-container-lifecycle-hook-hqqv6 deletion completed in 24.27382331s

• [SLOW TEST:238.207 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:02:08.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 13:02:08.766: INFO: Creating deployment "nginx-deployment"
Jan 22 13:02:08.788: INFO: Waiting for observed generation 1
Jan 22 13:02:11.385: INFO: Waiting for all required pods to come up
Jan 22 13:02:12.442: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 22 13:03:00.837: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 22 13:03:00.851: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 22 13:03:00.868: INFO: Updating deployment nginx-deployment
Jan 22 13:03:00.868: INFO: Waiting for observed generation 2
Jan 22 13:03:04.550: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 22 13:03:05.316: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 22 13:03:05.335: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 22 13:03:05.886: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 22 13:03:05.887: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 22 13:03:06.334: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 22 13:03:06.943: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 22 13:03:06.943: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 22 13:03:08.958: INFO: Updating deployment nginx-deployment
Jan 22 13:03:08.958: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 22 13:03:09.541: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 22 13:03:10.367: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 22 13:03:13.456: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qrsv9/deployments/nginx-deployment,UID:65ae0899-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081312,Generation:3,CreationTimestamp:2020-01-22 13:02:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-22 13:03:06 +0000 UTC 2020-01-22 13:02:08 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-22 13:03:09 +0000 UTC 2020-01-22 13:03:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 22 13:03:14.763: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qrsv9/replicasets/nginx-deployment-5c98f8fb5,UID:84bd89ad-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081317,Generation:3,CreationTimestamp:2020-01-22 13:03:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 65ae0899-3d17-11ea-a994-fa163e34d433 0xc0009a6cd7 0xc0009a6cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 22 13:03:14.763: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 22 13:03:14.764: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qrsv9/replicasets/nginx-deployment-85ddf47c5d,UID:65cc6aff-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081306,Generation:3,CreationTimestamp:2020-01-22 13:02:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 65ae0899-3d17-11ea-a994-fa163e34d433 0xc0009a6db7 0xc0009a6db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 22 13:03:14.850: INFO: Pod "nginx-deployment-5c98f8fb5-6xzhm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6xzhm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-6xzhm,UID:84df093f-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081226,Generation:0,CreationTimestamp:2020-01-22 13:03:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc0009a7a10 0xc0009a7a11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009a7a80} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0009a7aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.852: INFO: Pod "nginx-deployment-5c98f8fb5-77gr2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-77gr2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-77gr2,UID:8533cf39-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081234,Generation:0,CreationTimestamp:2020-01-22 13:03:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc0009a7b77 0xc0009a7b78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009a7c30} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0009a7c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.853: INFO: Pod "nginx-deployment-5c98f8fb5-8l66g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8l66g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-8l66g,UID:8a6bbdcd-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081292,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc0009a7d37 0xc0009a7d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009a7da0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0009a7dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.855: INFO: Pod "nginx-deployment-5c98f8fb5-ckx5b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ckx5b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-ckx5b,UID:84df09b4-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081239,Generation:0,CreationTimestamp:2020-01-22 13:03:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc0009a7e87 0xc0009a7e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009a7f30} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0009a7f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.855: INFO: Pod "nginx-deployment-5c98f8fb5-frt6l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-frt6l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-frt6l,UID:8a6b417b-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081303,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221a367 0xc00221a368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221a3d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221a3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.856: INFO: Pod "nginx-deployment-5c98f8fb5-hmskh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hmskh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-hmskh,UID:855e3750-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081244,Generation:0,CreationTimestamp:2020-01-22 13:03:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221a4d7 0xc00221a4d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221a540} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221a560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.857: INFO: Pod "nginx-deployment-5c98f8fb5-jp9c4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jp9c4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-jp9c4,UID:8a6bde4e-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081294,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221a627 0xc00221a628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221a790} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221a7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.858: INFO: Pod "nginx-deployment-5c98f8fb5-l4w4t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l4w4t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-l4w4t,UID:8a2638f5-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081288,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221a827 0xc00221a828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221ac30} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221aca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.859: INFO: Pod "nginx-deployment-5c98f8fb5-mzt2k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mzt2k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-mzt2k,UID:84da2556-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081213,Generation:0,CreationTimestamp:2020-01-22 13:03:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221ad27 0xc00221ad28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221ada0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221adc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.859: INFO: Pod "nginx-deployment-5c98f8fb5-ng8zv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ng8zv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-ng8zv,UID:8a25fb4d-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081280,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221b967 0xc00221b968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221b9d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221b9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.860: INFO: Pod "nginx-deployment-5c98f8fb5-rbgw6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rbgw6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-rbgw6,UID:8a6c928c-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081297,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221bb37 0xc00221bb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221bba0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221bbc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.861: INFO: Pod "nginx-deployment-5c98f8fb5-v2k4q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v2k4q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-v2k4q,UID:8a90e612-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081307,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221bc37 0xc00221bc38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221bd20} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221bd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.861: INFO: Pod "nginx-deployment-5c98f8fb5-xzqj8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xzqj8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-5c98f8fb5-xzqj8,UID:89e5ad91-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081319,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 84bd89ad-3d17-11ea-a994-fa163e34d433 0xc00221bdd7 0xc00221bdd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221be40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00221be60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.862: INFO: Pod "nginx-deployment-85ddf47c5d-5sdkn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5sdkn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-5sdkn,UID:65e2250e-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081163,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc00221bf27 0xc00221bf28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00221bf90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00221bfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-22 13:02:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2bd61c34d79b4eb920248f1738ac94c6deee284a77d08de0bc546fb16a476f5d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.863: INFO: Pod "nginx-deployment-85ddf47c5d-64ghj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-64ghj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-64ghj,UID:65ea1bae-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081159,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0157 0xc001ca0158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca0330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca0350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-22 13:02:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://dc84d673fd7d80eec0743cd824fd750e8b28ab7f597b883bbacb9c806d90e401}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.864: INFO: Pod "nginx-deployment-85ddf47c5d-6g8qx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6g8qx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-6g8qx,UID:65d7c5c9-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081158,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0537 0xc001ca0538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca05a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca05c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-22 13:02:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6c139420b259c0bb4ce6b7f0a9c9d9c869b996aa6b8f8c4650d4280149b114c8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.865: INFO: Pod "nginx-deployment-85ddf47c5d-8hn99" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8hn99,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-8hn99,UID:65ea4207-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081178,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0707 0xc001ca0708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca0770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca0790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-22 13:02:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e653f90b1c39f8feac98c5aa45a4c2fd57956abc4bf564cb9728f227cfd02d13}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.865: INFO: Pod "nginx-deployment-85ddf47c5d-c6s42" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c6s42,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-c6s42,UID:8a262ebf-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081287,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0907 0xc001ca0908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca0970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca0990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.866: INFO: Pod "nginx-deployment-85ddf47c5d-dj4kh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dj4kh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-dj4kh,UID:66178fe7-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081170,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0a07 0xc001ca0a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca0a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca0a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-22 13:02:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://30dc13229e199d4fc96290a9d05fad2053fc4746038209bd94482e3b643c317e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.866: INFO: Pod "nginx-deployment-85ddf47c5d-hf5rg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hf5rg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-hf5rg,UID:8a705983-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081296,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0b57 0xc001ca0b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca0be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca0c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.867: INFO: Pod "nginx-deployment-85ddf47c5d-hxql5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hxql5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-hxql5,UID:8a6fcb85-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081299,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0c77 0xc001ca0c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca0ce0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca0e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.867: INFO: Pod "nginx-deployment-85ddf47c5d-kszjx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kszjx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-kszjx,UID:8a26031f-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081286,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca0ed7 0xc001ca0ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca0f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca0f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.868: INFO: Pod "nginx-deployment-85ddf47c5d-mxjnp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mxjnp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-mxjnp,UID:8a26226f-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081282,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1007 0xc001ca1008}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca11b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca11d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.869: INFO: Pod "nginx-deployment-85ddf47c5d-nmj8f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nmj8f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-nmj8f,UID:661736e7-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081167,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1257 0xc001ca1258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca12f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca1310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-22 13:02:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://02a8cc6ff6a929ee17de3bb40cd949d58691c1306929367f969595eb662495e3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.869: INFO: Pod "nginx-deployment-85ddf47c5d-phqk9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-phqk9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-phqk9,UID:89e71fa2-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081273,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca13d7 0xc001ca13d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca1470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.870: INFO: Pod "nginx-deployment-85ddf47c5d-pnxf7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pnxf7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-pnxf7,UID:89e721fb-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081308,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1527 0xc001ca1528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca15b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.872: INFO: Pod "nginx-deployment-85ddf47c5d-ptswk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ptswk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-ptswk,UID:661900bf-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081175,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1697 0xc001ca1698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca1720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-22 13:02:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://956e01930346e8a7d80c699c9ea5ff1a3707dc624caddd703d086bc207f4b358}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.872: INFO: Pod "nginx-deployment-85ddf47c5d-sj5k6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sj5k6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-sj5k6,UID:8999edde-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081291,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1867 0xc001ca1868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca18d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca18f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-22 13:03:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.873: INFO: Pod "nginx-deployment-85ddf47c5d-wddqw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wddqw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-wddqw,UID:65ea4791-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081160,Generation:0,CreationTimestamp:2020-01-22 13:02:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca19a7 0xc001ca19a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1a80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca1aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:02:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-22 13:02:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-22 13:02:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f7c6a3f2d6cbd6f2af9e88062c2415d1473a263aa4bb2148ef3ea4eb1b04c4da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.873: INFO: Pod "nginx-deployment-85ddf47c5d-x242m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x242m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-x242m,UID:8a702b2f-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081295,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1b67 0xc001ca1b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1cf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca1d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.874: INFO: Pod "nginx-deployment-85ddf47c5d-x59kv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x59kv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-x59kv,UID:8a706971-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081300,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1d87 0xc001ca1d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca1e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.874: INFO: Pod "nginx-deployment-85ddf47c5d-x96wc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x96wc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-x96wc,UID:8a7036b0-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081298,Generation:0,CreationTimestamp:2020-01-22 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1e87 0xc001ca1e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ca1f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 22 13:03:14.875: INFO: Pod "nginx-deployment-85ddf47c5d-zbv2m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zbv2m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-qrsv9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qrsv9/pods/nginx-deployment-85ddf47c5d-zbv2m,UID:8a261009-3d17-11ea-a994-fa163e34d433,ResourceVersion:19081289,Generation:0,CreationTimestamp:2020-01-22 13:03:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 65cc6aff-3d17-11ea-a994-fa163e34d433 0xc001ca1f87 0xc001ca1f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgmfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgmfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vgmfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ca1ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b4060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-22 13:03:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:03:14.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-qrsv9" for this suite.
Jan 22 13:04:52.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:04:53.421: INFO: namespace: e2e-tests-deployment-qrsv9, resource: bindings, ignored listing per whitelist
Jan 22 13:04:53.504: INFO: namespace e2e-tests-deployment-qrsv9 deletion completed in 1m37.509768833s

• [SLOW TEST:164.874 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:04:53.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 22 13:04:53.889: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 22 13:04:53.930: INFO: Number of nodes with available pods: 0
Jan 22 13:04:53.930: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 22 13:04:54.142: INFO: Number of nodes with available pods: 0
Jan 22 13:04:54.142: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:04:56.525: INFO: Number of nodes with available pods: 0
Jan 22 13:04:56.525: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:04:57.377: INFO: Number of nodes with available pods: 0
Jan 22 13:04:57.377: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:04:58.161: INFO: Number of nodes with available pods: 0
Jan 22 13:04:58.161: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:04:59.162: INFO: Number of nodes with available pods: 0
Jan 22 13:04:59.162: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:00.160: INFO: Number of nodes with available pods: 0
Jan 22 13:05:00.161: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:02.823: INFO: Number of nodes with available pods: 0
Jan 22 13:05:02.823: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:03.182: INFO: Number of nodes with available pods: 0
Jan 22 13:05:03.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:04.181: INFO: Number of nodes with available pods: 0
Jan 22 13:05:04.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:05.163: INFO: Number of nodes with available pods: 0
Jan 22 13:05:05.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:06.158: INFO: Number of nodes with available pods: 0
Jan 22 13:05:06.158: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:07.167: INFO: Number of nodes with available pods: 1
Jan 22 13:05:07.167: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 22 13:05:07.344: INFO: Number of nodes with available pods: 1
Jan 22 13:05:07.344: INFO: Number of running nodes: 0, number of available pods: 1
Jan 22 13:05:08.358: INFO: Number of nodes with available pods: 0
Jan 22 13:05:08.358: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 22 13:05:08.413: INFO: Number of nodes with available pods: 0
Jan 22 13:05:08.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:09.425: INFO: Number of nodes with available pods: 0
Jan 22 13:05:09.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:10.432: INFO: Number of nodes with available pods: 0
Jan 22 13:05:10.432: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:11.465: INFO: Number of nodes with available pods: 0
Jan 22 13:05:11.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:12.428: INFO: Number of nodes with available pods: 0
Jan 22 13:05:12.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:13.614: INFO: Number of nodes with available pods: 0
Jan 22 13:05:13.615: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:14.437: INFO: Number of nodes with available pods: 0
Jan 22 13:05:14.437: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:15.422: INFO: Number of nodes with available pods: 0
Jan 22 13:05:15.422: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:16.429: INFO: Number of nodes with available pods: 0
Jan 22 13:05:16.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:17.614: INFO: Number of nodes with available pods: 0
Jan 22 13:05:17.614: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:18.426: INFO: Number of nodes with available pods: 0
Jan 22 13:05:18.426: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:19.429: INFO: Number of nodes with available pods: 0
Jan 22 13:05:19.429: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:20.431: INFO: Number of nodes with available pods: 0
Jan 22 13:05:20.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:21.439: INFO: Number of nodes with available pods: 0
Jan 22 13:05:21.439: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:22.455: INFO: Number of nodes with available pods: 0
Jan 22 13:05:22.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:23.426: INFO: Number of nodes with available pods: 0
Jan 22 13:05:23.426: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:24.581: INFO: Number of nodes with available pods: 0
Jan 22 13:05:24.581: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:25.702: INFO: Number of nodes with available pods: 0
Jan 22 13:05:25.702: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:26.434: INFO: Number of nodes with available pods: 0
Jan 22 13:05:26.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:27.427: INFO: Number of nodes with available pods: 0
Jan 22 13:05:27.427: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:28.425: INFO: Number of nodes with available pods: 0
Jan 22 13:05:28.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:29.435: INFO: Number of nodes with available pods: 0
Jan 22 13:05:29.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:30.500: INFO: Number of nodes with available pods: 0
Jan 22 13:05:30.500: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:31.423: INFO: Number of nodes with available pods: 0
Jan 22 13:05:31.423: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:32.454: INFO: Number of nodes with available pods: 0
Jan 22 13:05:32.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:33.442: INFO: Number of nodes with available pods: 0
Jan 22 13:05:33.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 22 13:05:34.433: INFO: Number of nodes with available pods: 1
Jan 22 13:05:34.433: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2jvqn, will wait for the garbage collector to delete the pods
Jan 22 13:05:34.568: INFO: Deleting DaemonSet.extensions daemon-set took: 46.798454ms
Jan 22 13:05:34.768: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.647519ms
Jan 22 13:05:52.859: INFO: Number of nodes with available pods: 0
Jan 22 13:05:52.859: INFO: Number of running nodes: 0, number of available pods: 0
Jan 22 13:05:52.865: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2jvqn/daemonsets","resourceVersion":"19081745"},"items":null}

Jan 22 13:05:52.869: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2jvqn/pods","resourceVersion":"19081745"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:05:52.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2jvqn" for this suite.
Jan 22 13:06:01.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:06:01.042: INFO: namespace: e2e-tests-daemonsets-2jvqn, resource: bindings, ignored listing per whitelist
Jan 22 13:06:01.159: INFO: namespace e2e-tests-daemonsets-2jvqn deletion completed in 8.24364728s

• [SLOW TEST:67.655 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:06:01.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 22 13:06:01.308: INFO: namespace e2e-tests-kubectl-n2kmb
Jan 22 13:06:01.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-n2kmb'
Jan 22 13:06:04.358: INFO: stderr: ""
Jan 22 13:06:04.358: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 22 13:06:06.207: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:06.208: INFO: Found 0 / 1
Jan 22 13:06:06.716: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:06.716: INFO: Found 0 / 1
Jan 22 13:06:07.431: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:07.431: INFO: Found 0 / 1
Jan 22 13:06:08.392: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:08.392: INFO: Found 0 / 1
Jan 22 13:06:09.374: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:09.374: INFO: Found 0 / 1
Jan 22 13:06:10.810: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:10.810: INFO: Found 0 / 1
Jan 22 13:06:11.687: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:11.687: INFO: Found 0 / 1
Jan 22 13:06:12.383: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:12.383: INFO: Found 0 / 1
Jan 22 13:06:13.376: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:13.376: INFO: Found 0 / 1
Jan 22 13:06:14.581: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:14.581: INFO: Found 1 / 1
Jan 22 13:06:14.581: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 22 13:06:14.603: INFO: Selector matched 1 pods for map[app:redis]
Jan 22 13:06:14.603: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 22 13:06:14.603: INFO: wait on redis-master startup in e2e-tests-kubectl-n2kmb 
Jan 22 13:06:14.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-px8ht redis-master --namespace=e2e-tests-kubectl-n2kmb'
Jan 22 13:06:14.922: INFO: stderr: ""
Jan 22 13:06:14.922: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Jan 13:06:12.650 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jan 13:06:12.651 # Server started, Redis version 3.2.12\n1:M 22 Jan 13:06:12.651 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jan 13:06:12.651 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 22 13:06:14.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-n2kmb'
Jan 22 13:06:15.247: INFO: stderr: ""
Jan 22 13:06:15.247: INFO: stdout: "service/rm2 exposed\n"
Jan 22 13:06:15.344: INFO: Service rm2 in namespace e2e-tests-kubectl-n2kmb found.
STEP: exposing service
Jan 22 13:06:17.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-n2kmb'
Jan 22 13:06:17.666: INFO: stderr: ""
Jan 22 13:06:17.667: INFO: stdout: "service/rm3 exposed\n"
Jan 22 13:06:17.686: INFO: Service rm3 in namespace e2e-tests-kubectl-n2kmb found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:06:19.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n2kmb" for this suite.
Jan 22 13:06:43.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:06:44.085: INFO: namespace: e2e-tests-kubectl-n2kmb, resource: bindings, ignored listing per whitelist
Jan 22 13:06:44.117: INFO: namespace e2e-tests-kubectl-n2kmb deletion completed in 24.403312397s

• [SLOW TEST:42.958 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:06:44.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-5gvq7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5gvq7 to expose endpoints map[]
Jan 22 13:06:44.585: INFO: Get endpoints failed (9.750593ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 22 13:06:45.597: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5gvq7 exposes endpoints map[] (1.022492318s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-5gvq7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5gvq7 to expose endpoints map[pod1:[80]]
Jan 22 13:06:52.265: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.648798855s elapsed, will retry)
Jan 22 13:06:57.868: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5gvq7 exposes endpoints map[pod1:[80]] (12.252123165s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-5gvq7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5gvq7 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 22 13:07:02.329: INFO: Unexpected endpoints: found map[0aafffe0-3d18-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.446672235s elapsed, will retry)
Jan 22 13:07:07.142: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5gvq7 exposes endpoints map[pod1:[80] pod2:[80]] (9.259730879s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-5gvq7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5gvq7 to expose endpoints map[pod2:[80]]
Jan 22 13:07:08.319: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5gvq7 exposes endpoints map[pod2:[80]] (1.157696857s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-5gvq7
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5gvq7 to expose endpoints map[]
Jan 22 13:07:10.129: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5gvq7 exposes endpoints map[] (1.789728063s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:07:10.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5gvq7" for this suite.
Jan 22 13:07:35.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:07:35.311: INFO: namespace: e2e-tests-services-5gvq7, resource: bindings, ignored listing per whitelist
Jan 22 13:07:35.319: INFO: namespace e2e-tests-services-5gvq7 deletion completed in 24.283235477s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:51.202 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:07:35.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2882de73-3d18-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 13:07:36.216: INFO: Waiting up to 5m0s for pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-4wvss" to be "success or failure"
Jan 22 13:07:36.227: INFO: Pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.798741ms
Jan 22 13:07:38.314: INFO: Pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09768759s
Jan 22 13:07:40.364: INFO: Pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147677935s
Jan 22 13:07:42.409: INFO: Pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192492346s
Jan 22 13:07:44.427: INFO: Pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210895863s
Jan 22 13:07:46.442: INFO: Pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.225739545s
STEP: Saw pod success
Jan 22 13:07:46.442: INFO: Pod "pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 13:07:46.446: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 22 13:07:46.604: INFO: Waiting for pod pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005 to disappear
Jan 22 13:07:46.686: INFO: Pod pod-secrets-28c54acd-3d18-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:07:46.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4wvss" for this suite.
Jan 22 13:07:52.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:07:53.053: INFO: namespace: e2e-tests-secrets-4wvss, resource: bindings, ignored listing per whitelist
Jan 22 13:07:53.073: INFO: namespace e2e-tests-secrets-4wvss deletion completed in 6.314363674s
STEP: Destroying namespace "e2e-tests-secret-namespace-dthz8" for this suite.
Jan 22 13:07:59.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:07:59.241: INFO: namespace: e2e-tests-secret-namespace-dthz8, resource: bindings, ignored listing per whitelist
Jan 22 13:07:59.301: INFO: namespace e2e-tests-secret-namespace-dthz8 deletion completed in 6.227709987s

• [SLOW TEST:23.981 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:07:59.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-qln74/secret-test-36d48565-3d18-11ea-ad91-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 22 13:07:59.731: INFO: Waiting up to 5m0s for pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005" in namespace "e2e-tests-secrets-qln74" to be "success or failure"
Jan 22 13:07:59.857: INFO: Pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 125.982874ms
Jan 22 13:08:02.694: INFO: Pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.962828789s
Jan 22 13:08:04.716: INFO: Pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.984964956s
Jan 22 13:08:08.773: INFO: Pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.042420728s
Jan 22 13:08:10.794: INFO: Pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.063120836s
Jan 22 13:08:12.848: INFO: Pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.117140352s
STEP: Saw pod success
Jan 22 13:08:12.848: INFO: Pod "pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 13:08:12.890: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005 container env-test: 
STEP: delete the pod
Jan 22 13:08:13.091: INFO: Waiting for pod pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005 to disappear
Jan 22 13:08:13.265: INFO: Pod pod-configmaps-36db3444-3d18-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:08:13.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qln74" for this suite.
Jan 22 13:08:19.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:08:19.343: INFO: namespace: e2e-tests-secrets-qln74, resource: bindings, ignored listing per whitelist
Jan 22 13:08:19.427: INFO: namespace e2e-tests-secrets-qln74 deletion completed in 6.151225058s

• [SLOW TEST:20.126 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:08:19.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-47z7f
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 22 13:08:19.747: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 22 13:08:54.400: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-47z7f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 22 13:08:54.400: INFO: >>> kubeConfig: /root/.kube/config
I0122 13:08:54.513960       8 log.go:172] (0xc000f86160) (0xc00210c1e0) Create stream
I0122 13:08:54.514031       8 log.go:172] (0xc000f86160) (0xc00210c1e0) Stream added, broadcasting: 1
I0122 13:08:54.522100       8 log.go:172] (0xc000f86160) Reply frame received for 1
I0122 13:08:54.522173       8 log.go:172] (0xc000f86160) (0xc0014e2000) Create stream
I0122 13:08:54.522185       8 log.go:172] (0xc000f86160) (0xc0014e2000) Stream added, broadcasting: 3
I0122 13:08:54.523510       8 log.go:172] (0xc000f86160) Reply frame received for 3
I0122 13:08:54.523560       8 log.go:172] (0xc000f86160) (0xc00201e000) Create stream
I0122 13:08:54.523570       8 log.go:172] (0xc000f86160) (0xc00201e000) Stream added, broadcasting: 5
I0122 13:08:54.524741       8 log.go:172] (0xc000f86160) Reply frame received for 5
I0122 13:08:54.741796       8 log.go:172] (0xc000f86160) Data frame received for 3
I0122 13:08:54.741918       8 log.go:172] (0xc0014e2000) (3) Data frame handling
I0122 13:08:54.741980       8 log.go:172] (0xc0014e2000) (3) Data frame sent
I0122 13:08:54.895393       8 log.go:172] (0xc000f86160) Data frame received for 1
I0122 13:08:54.895487       8 log.go:172] (0xc00210c1e0) (1) Data frame handling
I0122 13:08:54.895511       8 log.go:172] (0xc00210c1e0) (1) Data frame sent
I0122 13:08:54.895919       8 log.go:172] (0xc000f86160) (0xc00210c1e0) Stream removed, broadcasting: 1
I0122 13:08:54.896259       8 log.go:172] (0xc000f86160) (0xc0014e2000) Stream removed, broadcasting: 3
I0122 13:08:54.896882       8 log.go:172] (0xc000f86160) (0xc00201e000) Stream removed, broadcasting: 5
I0122 13:08:54.896986       8 log.go:172] (0xc000f86160) (0xc00210c1e0) Stream removed, broadcasting: 1
I0122 13:08:54.897001       8 log.go:172] (0xc000f86160) (0xc0014e2000) Stream removed, broadcasting: 3
I0122 13:08:54.897010       8 log.go:172] (0xc000f86160) (0xc00201e000) Stream removed, broadcasting: 5
Jan 22 13:08:54.897: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:08:54.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0122 13:08:54.898496       8 log.go:172] (0xc000f86160) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-47z7f" for this suite.
Jan 22 13:09:18.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:09:19.122: INFO: namespace: e2e-tests-pod-network-test-47z7f, resource: bindings, ignored listing per whitelist
Jan 22 13:09:19.136: INFO: namespace e2e-tests-pod-network-test-47z7f deletion completed in 24.220124823s

• [SLOW TEST:59.708 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
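The `ExecWithOptions` line in the networking test above runs `curl` inside a helper pod against a `/dial?...` endpoint: one test pod is asked to contact another and report the hostname it reaches. As a sketch, the probe URL can be rebuilt like this (`dialURL` is a hypothetical helper; note `url.Values.Encode` emits query keys in sorted order, so the parameter order differs from the logged command):

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds the kind of probe URL seen in the log: the netserver
// at proxyAddr is asked to "dial" targetHost:targetPort over HTTP once
// and return the hostname it observed.
func dialURL(proxyAddr, targetHost string, targetPort int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", targetHost)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", "1")
	u := url.URL{Scheme: "http", Host: proxyAddr, Path: "/dial", RawQuery: q.Encode()}
	return u.String()
}

func main() {
	fmt.Println(dialURL("10.32.0.5:8080", "10.32.0.4", 8080))
}
```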
S
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:09:19.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 22 13:09:32.674: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:09:34.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-8cmjd" for this suite.
Jan 22 13:09:58.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:09:59.039: INFO: namespace: e2e-tests-replicaset-8cmjd, resource: bindings, ignored listing per whitelist
Jan 22 13:09:59.128: INFO: namespace e2e-tests-replicaset-8cmjd deletion completed in 25.11779072s

• [SLOW TEST:39.992 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
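The Given/When/Then steps in the ReplicaSet test above hinge on label-selector matching: an orphan pod whose labels satisfy the ReplicaSet's selector is adopted, and a pod is released once its labels stop matching. A simplified sketch of that equality-based match (the real logic, including set-based selectors and owner references, lives in Kubernetes itself; `matches` is an illustrative name):

```go
package main

import "fmt"

// matches reports whether a pod's labels satisfy an equality-based
// selector: every selector key must be present with the same value.
func matches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"name": "pod-adoption-release"}
	orphan := map[string]string{"name": "pod-adoption-release"}
	relabeled := map[string]string{"name": "pod-adoption-release-changed"}
	fmt.Println(matches(selector, orphan))    // true  -> adopted
	fmt.Println(matches(selector, relabeled)) // false -> released
}
```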
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:09:59.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-7e1e01ce-3d18-11ea-ad91-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-7e1e0242-3d18-11ea-ad91-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7e1e01ce-3d18-11ea-ad91-0242ac110005
STEP: Updating configmap cm-test-opt-upd-7e1e0242-3d18-11ea-ad91-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-7e1e027a-3d18-11ea-ad91-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:11:34.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sbrzl" for this suite.
Jan 22 13:11:58.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:11:59.161: INFO: namespace: e2e-tests-projected-sbrzl, resource: bindings, ignored listing per whitelist
Jan 22 13:11:59.280: INFO: namespace e2e-tests-projected-sbrzl deletion completed in 24.481699412s

• [SLOW TEST:120.152 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:11:59.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 22 13:11:59.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005" in namespace "e2e-tests-downward-api-4wtc6" to be "success or failure"
Jan 22 13:11:59.683: INFO: Pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.660249ms
Jan 22 13:12:02.235: INFO: Pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642735497s
Jan 22 13:12:04.278: INFO: Pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.685755432s
Jan 22 13:12:07.338: INFO: Pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.745467205s
Jan 22 13:12:09.759: INFO: Pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166180522s
Jan 22 13:12:11.768: INFO: Pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.175183905s
STEP: Saw pod success
Jan 22 13:12:11.768: INFO: Pod "downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 13:12:11.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005 container client-container: 
STEP: delete the pod
Jan 22 13:12:12.708: INFO: Waiting for pod downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005 to disappear
Jan 22 13:12:12.722: INFO: Pod downwardapi-volume-c5d4ef16-3d18-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:12:12.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4wtc6" for this suite.
Jan 22 13:12:20.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:12:20.847: INFO: namespace: e2e-tests-downward-api-4wtc6, resource: bindings, ignored listing per whitelist
Jan 22 13:12:20.923: INFO: namespace e2e-tests-downward-api-4wtc6 deletion completed in 8.176944365s

• [SLOW TEST:21.641 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 22 13:12:20.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 22 13:12:21.036: INFO: Waiting up to 5m0s for pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005" in namespace "e2e-tests-var-expansion-8v4ll" to be "success or failure"
Jan 22 13:12:21.043: INFO: Pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.517225ms
Jan 22 13:12:23.359: INFO: Pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322759795s
Jan 22 13:12:25.396: INFO: Pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359698813s
Jan 22 13:12:27.728: INFO: Pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.691312335s
Jan 22 13:12:29.995: INFO: Pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.959001347s
Jan 22 13:12:32.015: INFO: Pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.978364137s
STEP: Saw pod success
Jan 22 13:12:32.015: INFO: Pod "var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005" satisfied condition "success or failure"
Jan 22 13:12:32.023: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 22 13:12:32.394: INFO: Waiting for pod var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005 to disappear
Jan 22 13:12:32.589: INFO: Pod var-expansion-d29dd0d3-3d18-11ea-ad91-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 22 13:12:32.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-8v4ll" for this suite.
Jan 22 13:12:38.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 22 13:12:38.877: INFO: namespace: e2e-tests-var-expansion-8v4ll, resource: bindings, ignored listing per whitelist
Jan 22 13:12:38.958: INFO: namespace e2e-tests-var-expansion-8v4ll deletion completed in 6.337081971s

• [SLOW TEST:18.035 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
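The Variable Expansion test above verifies that `$(VAR)` references in a container's command are substituted from the container's environment, with unresolved references left as-is. A simplified standalone sketch of that substitution (the authoritative rules, e.g. `$$` escaping, are in Kubernetes; `expand` is an illustrative name):

```go
package main

import (
	"fmt"
	"regexp"
)

// ref matches $(NAME)-style references in a command argument.
var ref = regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)

// expand substitutes $(VAR) references from env; references to
// variables that are not defined are left unchanged.
func expand(arg string, env map[string]string) string {
	return ref.ReplaceAllStringFunc(arg, func(m string) string {
		name := m[2 : len(m)-1] // strip "$(" and ")"
		if v, ok := env[name]; ok {
			return v
		}
		return m
	})
}

func main() {
	env := map[string]string{"MESSAGE": "hello"}
	fmt.Println(expand("echo $(MESSAGE) $(UNSET)", env))
}
```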
SSSSSSSSSSS
Jan 22 13:12:38.959: INFO: Running AfterSuite actions on all nodes
Jan 22 13:12:38.959: INFO: Running AfterSuite actions on node 1
Jan 22 13:12:38.959: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8722.780 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS