I0127 12:56:07.678302 8 e2e.go:243] Starting e2e run "3ff3f3ac-6df1-4bf3-bdbb-9eab3737a556" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580129766 - Will randomize all specs
Will run 215 of 4412 specs
Jan 27 12:56:08.037: INFO: >>> kubeConfig: /root/.kube/config
Jan 27 12:56:08.040: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 27 12:56:08.064: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 27 12:56:08.095: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 27 12:56:08.095: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 27 12:56:08.095: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 27 12:56:08.113: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 27 12:56:08.113: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 27 12:56:08.113: INFO: e2e test version: v1.15.7
Jan 27 12:56:08.115: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 12:56:08.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
Jan 27 12:56:08.163: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8306
I0127 12:56:08.170526 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8306, replica count: 1
I0127 12:56:09.221305 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 12:56:10.221673 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 12:56:11.222062 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 12:56:12.222479 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 12:56:13.222929 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 12:56:14.223347 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 12:56:15.223719 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 12:56:16.224142 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 27 12:56:16.419: INFO: Created: latency-svc-8bqw4
Jan 27 12:56:16.448: INFO: Got endpoints: latency-svc-8bqw4 [123.796612ms]
Jan 27 12:56:16.535: INFO: Created: latency-svc-7zpzn
Jan 27 12:56:16.604: INFO: Got endpoints: latency-svc-7zpzn [155.409112ms]
Jan 27 12:56:16.661: INFO: Created: latency-svc-jtjdd
Jan 27 12:56:16.685: INFO: Got endpoints: latency-svc-jtjdd [234.972211ms]
Jan 27 12:56:16.813: INFO: Created: latency-svc-mgcbs
Jan 27 12:56:16.856: INFO: Got endpoints: latency-svc-mgcbs [405.753683ms]
Jan 27 12:56:16.989: INFO: Created: latency-svc-r8dtp
Jan 27 12:56:17.040: INFO: Created: latency-svc-29rrw
Jan 27 12:56:17.040: INFO: Got endpoints: latency-svc-r8dtp [590.058853ms]
Jan 27 12:56:17.058: INFO: Got endpoints: latency-svc-29rrw [608.355276ms]
Jan 27 12:56:17.078: INFO: Created: latency-svc-96h2d
Jan 27 12:56:17.232: INFO: Got endpoints: latency-svc-96h2d [781.924978ms]
Jan 27 12:56:17.257: INFO: Created: latency-svc-dfjzh
Jan 27 12:56:17.281: INFO: Got endpoints: latency-svc-dfjzh [829.63823ms]
Jan 27 12:56:17.329: INFO: Created: latency-svc-prqxr
Jan 27 12:56:17.398: INFO: Got endpoints: latency-svc-prqxr [949.505697ms]
Jan 27 12:56:17.425: INFO: Created: latency-svc-kjjp8
Jan 27 12:56:17.446: INFO: Got endpoints: latency-svc-kjjp8 [995.352572ms]
Jan 27 12:56:17.490: INFO: Created: latency-svc-xz68r
Jan 27 12:56:17.588: INFO: Got endpoints: latency-svc-xz68r [1.138896138s]
Jan 27 12:56:17.617: INFO: Created: latency-svc-jbrql
Jan 27 12:56:17.635: INFO: Got endpoints: latency-svc-jbrql [1.18625247s]
Jan 27 12:56:17.664: INFO: Created: latency-svc-7g2tl
Jan 27 12:56:17.784: INFO: Got endpoints: latency-svc-7g2tl [1.333177866s]
Jan 27 12:56:17.794: INFO: Created: latency-svc-nd2hl
Jan 27 12:56:17.854: INFO: Got endpoints: latency-svc-nd2hl [218.424411ms]
Jan 27 12:56:17.946: INFO: Created: latency-svc-9ssw8
Jan 27 12:56:17.978: INFO: Got endpoints: latency-svc-9ssw8 [1.527303221s]
Jan 27 12:56:18.015: INFO: Created: latency-svc-r2wl9
Jan 27 12:56:18.021: INFO: Got endpoints: latency-svc-r2wl9 [1.570210516s]
Jan 27 12:56:18.178: INFO: Created: latency-svc-wgzjw
Jan 27 12:56:18.183: INFO: Got endpoints: latency-svc-wgzjw [1.732426497s]
Jan 27 12:56:18.237: INFO: Created: latency-svc-qr2fp
Jan 27 12:56:18.245: INFO: Got endpoints: latency-svc-qr2fp [1.640231069s]
Jan 27 12:56:18.276: INFO: Created: latency-svc-zt8mr
Jan 27 12:56:18.354: INFO: Got endpoints: latency-svc-zt8mr [1.668554343s]
Jan 27 12:56:18.395: INFO: Created: latency-svc-tml8p
Jan 27 12:56:18.411: INFO: Got endpoints: latency-svc-tml8p [1.55521675s]
Jan 27 12:56:18.438: INFO: Created: latency-svc-h28bm
Jan 27 12:56:18.527: INFO: Got endpoints: latency-svc-h28bm [1.486825305s]
Jan 27 12:56:18.540: INFO: Created: latency-svc-ggr6h
Jan 27 12:56:18.544: INFO: Got endpoints: latency-svc-ggr6h [1.485792909s]
Jan 27 12:56:18.610: INFO: Created: latency-svc-lnd6v
Jan 27 12:56:18.617: INFO: Got endpoints: latency-svc-lnd6v [1.385088121s]
Jan 27 12:56:18.757: INFO: Created: latency-svc-bqdnd
Jan 27 12:56:18.776: INFO: Got endpoints: latency-svc-bqdnd [1.495480914s]
Jan 27 12:56:18.858: INFO: Created: latency-svc-dpcp4
Jan 27 12:56:18.866: INFO: Got endpoints: latency-svc-dpcp4 [1.467746129s]
Jan 27 12:56:18.911: INFO: Created: latency-svc-6gk8c
Jan 27 12:56:18.915: INFO: Got endpoints: latency-svc-6gk8c [1.468232245s]
Jan 27 12:56:18.956: INFO: Created: latency-svc-8nvt2
Jan 27 12:56:19.063: INFO: Got endpoints: latency-svc-8nvt2 [1.474647771s]
Jan 27 12:56:19.070: INFO: Created: latency-svc-wh8r5
Jan 27 12:56:19.090: INFO: Got endpoints: latency-svc-wh8r5 [1.305687236s]
Jan 27 12:56:19.225: INFO: Created: latency-svc-snqf7
Jan 27 12:56:19.235: INFO: Got endpoints: latency-svc-snqf7 [1.381511627s]
Jan 27 12:56:19.258: INFO: Created: latency-svc-mkzwh
Jan 27 12:56:19.272: INFO: Got endpoints: latency-svc-mkzwh [1.294557687s]
Jan 27 12:56:19.313: INFO: Created: latency-svc-v5f9p
Jan 27 12:56:19.364: INFO: Got endpoints: latency-svc-v5f9p [1.343394062s]
Jan 27 12:56:19.412: INFO: Created: latency-svc-dc6v6
Jan 27 12:56:19.429: INFO: Got endpoints: latency-svc-dc6v6 [1.24591629s]
Jan 27 12:56:19.559: INFO: Created: latency-svc-wmw8h
Jan 27 12:56:19.571: INFO: Got endpoints: latency-svc-wmw8h [1.326844906s]
Jan 27 12:56:19.607: INFO: Created: latency-svc-s9dkk
Jan 27 12:56:19.618: INFO: Got endpoints: latency-svc-s9dkk [1.263091552s]
Jan 27 12:56:19.738: INFO: Created: latency-svc-p7qn9
Jan 27 12:56:19.743: INFO: Got endpoints: latency-svc-p7qn9 [1.332057871s]
Jan 27 12:56:19.838: INFO: Created: latency-svc-5wj7z
Jan 27 12:56:19.974: INFO: Got endpoints: latency-svc-5wj7z [1.446334533s]
Jan 27 12:56:19.979: INFO: Created: latency-svc-ff7vb
Jan 27 12:56:19.998: INFO: Got endpoints: latency-svc-ff7vb [1.453648255s]
Jan 27 12:56:20.056: INFO: Created: latency-svc-7j8bg
Jan 27 12:56:20.175: INFO: Got endpoints: latency-svc-7j8bg [1.557726834s]
Jan 27 12:56:20.176: INFO: Created: latency-svc-pwmrb
Jan 27 12:56:20.189: INFO: Got endpoints: latency-svc-pwmrb [1.41193463s]
Jan 27 12:56:20.257: INFO: Created: latency-svc-l8bm6
Jan 27 12:56:20.341: INFO: Got endpoints: latency-svc-l8bm6 [1.474916701s]
Jan 27 12:56:20.348: INFO: Created: latency-svc-dv5mm
Jan 27 12:56:20.362: INFO: Got endpoints: latency-svc-dv5mm [1.446885457s]
Jan 27 12:56:20.416: INFO: Created: latency-svc-c4l68
Jan 27 12:56:20.524: INFO: Got endpoints: latency-svc-c4l68 [1.460844059s]
Jan 27 12:56:20.547: INFO: Created: latency-svc-8bzb2
Jan 27 12:56:20.563: INFO: Got endpoints: latency-svc-8bzb2 [1.472822505s]
Jan 27 12:56:20.618: INFO: Created: latency-svc-jhtfr
Jan 27 12:56:20.628: INFO: Got endpoints: latency-svc-jhtfr [1.392254184s]
Jan 27 12:56:20.708: INFO: Created: latency-svc-29c25
Jan 27 12:56:20.736: INFO: Got endpoints: latency-svc-29c25 [1.463290736s]
Jan 27 12:56:20.785: INFO: Created: latency-svc-7vfx9
Jan 27 12:56:20.790: INFO: Got endpoints: latency-svc-7vfx9 [1.424520315s]
Jan 27 12:56:20.881: INFO: Created: latency-svc-pl75k
Jan 27 12:56:20.903: INFO: Got endpoints: latency-svc-pl75k [1.473403163s]
Jan 27 12:56:20.941: INFO: Created: latency-svc-6qztj
Jan 27 12:56:20.946: INFO: Got endpoints: latency-svc-6qztj [1.374185919s]
Jan 27 12:56:21.039: INFO: Created: latency-svc-vbwpt
Jan 27 12:56:21.058: INFO: Got endpoints: latency-svc-vbwpt [1.440094191s]
Jan 27 12:56:21.078: INFO: Created: latency-svc-m7z2q
Jan 27 12:56:21.082: INFO: Got endpoints: latency-svc-m7z2q [1.339014367s]
Jan 27 12:56:21.209: INFO: Created: latency-svc-mkxrw
Jan 27 12:56:21.219: INFO: Got endpoints: latency-svc-mkxrw [1.244972204s]
Jan 27 12:56:21.280: INFO: Created: latency-svc-cgdjx
Jan 27 12:56:21.285: INFO: Got endpoints: latency-svc-cgdjx [1.286851235s]
Jan 27 12:56:21.433: INFO: Created: latency-svc-fc25x
Jan 27 12:56:21.441: INFO: Got endpoints: latency-svc-fc25x [1.26616447s]
Jan 27 12:56:21.476: INFO: Created: latency-svc-7lwl8
Jan 27 12:56:21.508: INFO: Got endpoints: latency-svc-7lwl8 [1.318767503s]
Jan 27 12:56:21.509: INFO: Created: latency-svc-7cpzs
Jan 27 12:56:21.511: INFO: Got endpoints: latency-svc-7cpzs [1.17005043s]
Jan 27 12:56:21.656: INFO: Created: latency-svc-5vv64
Jan 27 12:56:21.660: INFO: Got endpoints: latency-svc-5vv64 [1.297503526s]
Jan 27 12:56:21.701: INFO: Created: latency-svc-bvh6f
Jan 27 12:56:21.722: INFO: Got endpoints: latency-svc-bvh6f [1.197604428s]
Jan 27 12:56:21.897: INFO: Created: latency-svc-v78qm
Jan 27 12:56:21.961: INFO: Got endpoints: latency-svc-v78qm [1.398316641s]
Jan 27 12:56:21.965: INFO: Created: latency-svc-hglls
Jan 27 12:56:22.084: INFO: Got endpoints: latency-svc-hglls [1.456307001s]
Jan 27 12:56:22.166: INFO: Created: latency-svc-kfvz6
Jan 27 12:56:22.176: INFO: Got endpoints: latency-svc-kfvz6 [1.440144373s]
Jan 27 12:56:22.388: INFO: Created: latency-svc-fklvm
Jan 27 12:56:22.397: INFO: Got endpoints: latency-svc-fklvm [1.607739415s]
Jan 27 12:56:22.457: INFO: Created: latency-svc-fzqvs
Jan 27 12:56:22.631: INFO: Got endpoints: latency-svc-fzqvs [1.72814568s]
Jan 27 12:56:22.673: INFO: Created: latency-svc-4bl4d
Jan 27 12:56:22.681: INFO: Got endpoints: latency-svc-4bl4d [1.734579467s]
Jan 27 12:56:22.723: INFO: Created: latency-svc-gjth6
Jan 27 12:56:22.851: INFO: Got endpoints: latency-svc-gjth6 [1.792559092s]
Jan 27 12:56:22.923: INFO: Created: latency-svc-rpt8n
Jan 27 12:56:23.054: INFO: Got endpoints: latency-svc-rpt8n [1.971850318s]
Jan 27 12:56:23.076: INFO: Created: latency-svc-s6npr
Jan 27 12:56:23.141: INFO: Got endpoints: latency-svc-s6npr [1.921325459s]
Jan 27 12:56:23.158: INFO: Created: latency-svc-jjz9d
Jan 27 12:56:23.261: INFO: Got endpoints: latency-svc-jjz9d [1.97661793s]
Jan 27 12:56:23.284: INFO: Created: latency-svc-gn7ml
Jan 27 12:56:23.294: INFO: Got endpoints: latency-svc-gn7ml [1.852974803s]
Jan 27 12:56:23.350: INFO: Created: latency-svc-gjktx
Jan 27 12:56:23.532: INFO: Got endpoints: latency-svc-gjktx [2.023704587s]
Jan 27 12:56:23.556: INFO: Created: latency-svc-26kxh
Jan 27 12:56:23.592: INFO: Got endpoints: latency-svc-26kxh [2.081079619s]
Jan 27 12:56:23.748: INFO: Created: latency-svc-6qxgs
Jan 27 12:56:23.756: INFO: Got endpoints: latency-svc-6qxgs [2.096135364s]
Jan 27 12:56:23.805: INFO: Created: latency-svc-8zcz9
Jan 27 12:56:23.827: INFO: Got endpoints: latency-svc-8zcz9 [2.104684452s]
Jan 27 12:56:23.991: INFO: Created: latency-svc-dd4gc
Jan 27 12:56:24.019: INFO: Got endpoints: latency-svc-dd4gc [2.057819088s]
Jan 27 12:56:24.175: INFO: Created: latency-svc-jl72c
Jan 27 12:56:24.187: INFO: Got endpoints: latency-svc-jl72c [2.102197729s]
Jan 27 12:56:24.241: INFO: Created: latency-svc-mj9d9
Jan 27 12:56:24.365: INFO: Got endpoints: latency-svc-mj9d9 [2.18833177s]
Jan 27 12:56:24.368: INFO: Created: latency-svc-v9pf8
Jan 27 12:56:24.390: INFO: Got endpoints: latency-svc-v9pf8 [1.992542173s]
Jan 27 12:56:24.415: INFO: Created: latency-svc-c526p
Jan 27 12:56:24.426: INFO: Got endpoints: latency-svc-c526p [1.794196047s]
Jan 27 12:56:24.566: INFO: Created: latency-svc-cff2r
Jan 27 12:56:24.581: INFO: Got endpoints: latency-svc-cff2r [1.899937332s]
Jan 27 12:56:24.616: INFO: Created: latency-svc-kfnsp
Jan 27 12:56:24.654: INFO: Got endpoints: latency-svc-kfnsp [1.80250119s]
Jan 27 12:56:24.656: INFO: Created: latency-svc-zb4dn
Jan 27 12:56:24.760: INFO: Got endpoints: latency-svc-zb4dn [1.705042754s]
Jan 27 12:56:24.781: INFO: Created: latency-svc-k5nnc
Jan 27 12:56:24.790: INFO: Got endpoints: latency-svc-k5nnc [1.648430909s]
Jan 27 12:56:24.863: INFO: Created: latency-svc-72zr5
Jan 27 12:56:24.945: INFO: Got endpoints: latency-svc-72zr5 [1.68324734s]
Jan 27 12:56:24.970: INFO: Created: latency-svc-hvxcz
Jan 27 12:56:24.990: INFO: Got endpoints: latency-svc-hvxcz [1.695512127s]
Jan 27 12:56:25.028: INFO: Created: latency-svc-kfnqr
Jan 27 12:56:25.039: INFO: Got endpoints: latency-svc-kfnqr [1.50714913s]
Jan 27 12:56:25.191: INFO: Created: latency-svc-9qrss
Jan 27 12:56:25.197: INFO: Got endpoints: latency-svc-9qrss [1.604289009s]
Jan 27 12:56:25.233: INFO: Created: latency-svc-k5px2
Jan 27 12:56:25.242: INFO: Got endpoints: latency-svc-k5px2 [1.485673476s]
Jan 27 12:56:25.285: INFO: Created: latency-svc-p7l9s
Jan 27 12:56:25.405: INFO: Got endpoints: latency-svc-p7l9s [1.57762069s]
Jan 27 12:56:25.419: INFO: Created: latency-svc-4sqvr
Jan 27 12:56:25.426: INFO: Got endpoints: latency-svc-4sqvr [1.406290433s]
Jan 27 12:56:25.473: INFO: Created: latency-svc-kf76d
Jan 27 12:56:25.581: INFO: Got endpoints: latency-svc-kf76d [1.393466004s]
Jan 27 12:56:25.590: INFO: Created: latency-svc-9ncc7
Jan 27 12:56:25.597: INFO: Got endpoints: latency-svc-9ncc7 [1.232021125s]
Jan 27 12:56:25.634: INFO: Created: latency-svc-7wgpk
Jan 27 12:56:25.642: INFO: Got endpoints: latency-svc-7wgpk [1.251149048s]
Jan 27 12:56:25.687: INFO: Created: latency-svc-vg7q6
Jan 27 12:56:25.804: INFO: Got endpoints: latency-svc-vg7q6 [1.378378725s]
Jan 27 12:56:25.830: INFO: Created: latency-svc-hvrdx
Jan 27 12:56:25.840: INFO: Got endpoints: latency-svc-hvrdx [1.258201482s]
Jan 27 12:56:25.897: INFO: Created: latency-svc-48ltd
Jan 27 12:56:26.010: INFO: Got endpoints: latency-svc-48ltd [1.35569108s]
Jan 27 12:56:26.028: INFO: Created: latency-svc-cslnx
Jan 27 12:56:26.053: INFO: Got endpoints: latency-svc-cslnx [1.292962513s]
Jan 27 12:56:26.336: INFO: Created: latency-svc-dh42p
Jan 27 12:56:26.342: INFO: Got endpoints: latency-svc-dh42p [1.551801465s]
Jan 27 12:56:26.398: INFO: Created: latency-svc-rbkfc
Jan 27 12:56:26.403: INFO: Got endpoints: latency-svc-rbkfc [1.45801636s]
Jan 27 12:56:26.428: INFO: Created: latency-svc-j5m7q
Jan 27 12:56:26.552: INFO: Got endpoints: latency-svc-j5m7q [1.561332876s]
Jan 27 12:56:26.582: INFO: Created: latency-svc-xdmp8
Jan 27 12:56:26.607: INFO: Created: latency-svc-np77g
Jan 27 12:56:26.617: INFO: Got endpoints: latency-svc-xdmp8 [1.577301599s]
Jan 27 12:56:26.627: INFO: Got endpoints: latency-svc-np77g [1.430012347s]
Jan 27 12:56:26.797: INFO: Created: latency-svc-9flmb
Jan 27 12:56:26.821: INFO: Got endpoints: latency-svc-9flmb [1.578474833s]
Jan 27 12:56:26.864: INFO: Created: latency-svc-lt8gp
Jan 27 12:56:26.890: INFO: Got endpoints: latency-svc-lt8gp [1.484034288s]
Jan 27 12:56:27.075: INFO: Created: latency-svc-wm49w
Jan 27 12:56:27.084: INFO: Got endpoints: latency-svc-wm49w [1.657434478s]
Jan 27 12:56:27.171: INFO: Created: latency-svc-tkh4j
Jan 27 12:56:27.297: INFO: Got endpoints: latency-svc-tkh4j [1.715991408s]
Jan 27 12:56:27.325: INFO: Created: latency-svc-dxznf
Jan 27 12:56:27.343: INFO: Got endpoints: latency-svc-dxznf [1.745563411s]
Jan 27 12:56:27.363: INFO: Created: latency-svc-zk9wt
Jan 27 12:56:27.371: INFO: Got endpoints: latency-svc-zk9wt [1.729282745s]
Jan 27 12:56:27.538: INFO: Created: latency-svc-kg6ts
Jan 27 12:56:27.547: INFO: Got endpoints: latency-svc-kg6ts [1.742867498s]
Jan 27 12:56:27.590: INFO: Created: latency-svc-jvlw4
Jan 27 12:56:27.604: INFO: Got endpoints: latency-svc-jvlw4 [1.764630801s]
Jan 27 12:56:27.754: INFO: Created: latency-svc-676wx
Jan 27 12:56:27.764: INFO: Got endpoints: latency-svc-676wx [1.753588862s]
Jan 27 12:56:28.089: INFO: Created: latency-svc-2gbhl
Jan 27 12:56:28.112: INFO: Got endpoints: latency-svc-2gbhl [2.058555619s]
Jan 27 12:56:28.273: INFO: Created: latency-svc-x42c9
Jan 27 12:56:28.281: INFO: Got endpoints: latency-svc-x42c9 [1.938680943s]
Jan 27 12:56:28.344: INFO: Created: latency-svc-9gjl2
Jan 27 12:56:28.359: INFO: Got endpoints: latency-svc-9gjl2 [1.95598824s]
Jan 27 12:56:28.492: INFO: Created: latency-svc-xp6pp
Jan 27 12:56:28.534: INFO: Got endpoints: latency-svc-xp6pp [1.981831049s]
Jan 27 12:56:28.576: INFO: Created: latency-svc-vbs8h
Jan 27 12:56:28.749: INFO: Got endpoints: latency-svc-vbs8h [2.131746231s]
Jan 27 12:56:28.760: INFO: Created: latency-svc-kdcjg
Jan 27 12:56:28.774: INFO: Got endpoints: latency-svc-kdcjg [2.146875785s]
Jan 27 12:56:28.826: INFO: Created: latency-svc-s9kvg
Jan 27 12:56:29.048: INFO: Got endpoints: latency-svc-s9kvg [2.226995479s]
Jan 27 12:56:29.052: INFO: Created: latency-svc-j2pn9
Jan 27 12:56:29.066: INFO: Got endpoints: latency-svc-j2pn9 [2.175507066s]
Jan 27 12:56:29.138: INFO: Created: latency-svc-4ktpn
Jan 27 12:56:29.372: INFO: Got endpoints: latency-svc-4ktpn [2.287945372s]
Jan 27 12:56:29.385: INFO: Created: latency-svc-c44rd
Jan 27 12:56:29.400: INFO: Got endpoints: latency-svc-c44rd [2.102456244s]
Jan 27 12:56:29.616: INFO: Created: latency-svc-v9cv5
Jan 27 12:56:29.626: INFO: Got endpoints: latency-svc-v9cv5 [2.283137589s]
Jan 27 12:56:29.721: INFO: Created: latency-svc-kngcs
Jan 27 12:56:29.891: INFO: Got endpoints: latency-svc-kngcs [2.519863666s]
Jan 27 12:56:29.946: INFO: Created: latency-svc-rhxck
Jan 27 12:56:29.978: INFO: Got endpoints: latency-svc-rhxck [2.430740007s]
Jan 27 12:56:30.192: INFO: Created: latency-svc-s2qrc
Jan 27 12:56:30.239: INFO: Got endpoints: latency-svc-s2qrc [2.634378873s]
Jan 27 12:56:30.271: INFO: Created: latency-svc-649r6
Jan 27 12:56:30.274: INFO: Got endpoints: latency-svc-649r6 [2.509254134s]
Jan 27 12:56:30.427: INFO: Created: latency-svc-dtjzv
Jan 27 12:56:30.438: INFO: Got endpoints: latency-svc-dtjzv [2.326103797s]
Jan 27 12:56:30.464: INFO: Created: latency-svc-xkmh6
Jan 27 12:56:30.519: INFO: Created: latency-svc-fqm5t
Jan 27 12:56:30.648: INFO: Got endpoints: latency-svc-fqm5t [2.288474392s]
Jan 27 12:56:30.648: INFO: Got endpoints: latency-svc-xkmh6 [2.367328169s]
Jan 27 12:56:30.661: INFO: Created: latency-svc-nn8wx
Jan 27 12:56:30.683: INFO: Got endpoints: latency-svc-nn8wx [2.148528191s]
Jan 27 12:56:30.743: INFO: Created: latency-svc-g2tsp
Jan 27 12:56:30.870: INFO: Got endpoints: latency-svc-g2tsp [2.120864856s]
Jan 27 12:56:30.887: INFO: Created: latency-svc-9pc24
Jan 27 12:56:30.903: INFO: Got endpoints: latency-svc-9pc24 [2.128528379s]
Jan 27 12:56:30.961: INFO: Created: latency-svc-hfrpb
Jan 27 12:56:30.963: INFO: Got endpoints: latency-svc-hfrpb [1.914848563s]
Jan 27 12:56:31.154: INFO: Created: latency-svc-n6l6n
Jan 27 12:56:31.183: INFO: Got endpoints: latency-svc-n6l6n [2.117379861s]
Jan 27 12:56:31.237: INFO: Created: latency-svc-rg9sx
Jan 27 12:56:31.460: INFO: Got endpoints: latency-svc-rg9sx [2.087986904s]
Jan 27 12:56:31.463: INFO: Created: latency-svc-79sd8
Jan 27 12:56:31.472: INFO: Got endpoints: latency-svc-79sd8 [2.071711198s]
Jan 27 12:56:31.519: INFO: Created: latency-svc-md77m
Jan 27 12:56:31.525: INFO: Got endpoints: latency-svc-md77m [1.898765893s]
Jan 27 12:56:31.557: INFO: Created: latency-svc-57gxk
Jan 27 12:56:31.660: INFO: Got endpoints: latency-svc-57gxk [1.768781744s]
Jan 27 12:56:31.706: INFO: Created: latency-svc-pgjcn
Jan 27 12:56:31.707: INFO: Got endpoints: latency-svc-pgjcn [1.728092274s]
Jan 27 12:56:31.757: INFO: Created: latency-svc-4k558
Jan 27 12:56:31.889: INFO: Got endpoints: latency-svc-4k558 [1.649693896s]
Jan 27 12:56:31.926: INFO: Created: latency-svc-spx9z
Jan 27 12:56:31.933: INFO: Got endpoints: latency-svc-spx9z [1.659524907s]
Jan 27 12:56:31.978: INFO: Created: latency-svc-v6qbl
Jan 27 12:56:31.980: INFO: Got endpoints: latency-svc-v6qbl [1.541750428s]
Jan 27 12:56:32.136: INFO: Created: latency-svc-vcrcp
Jan 27 12:56:32.184: INFO: Got endpoints: latency-svc-vcrcp [1.535614792s]
Jan 27 12:56:32.323: INFO: Created: latency-svc-6k84s
Jan 27 12:56:32.336: INFO: Got endpoints: latency-svc-6k84s [1.68720224s]
Jan 27 12:56:32.384: INFO: Created: latency-svc-s2q9h
Jan 27 12:56:32.388: INFO: Got endpoints: latency-svc-s2q9h [1.705155616s]
Jan 27 12:56:32.524: INFO: Created: latency-svc-7gv6f
Jan 27 12:56:32.562: INFO: Got endpoints: latency-svc-7gv6f [1.691531358s]
Jan 27 12:56:32.567: INFO: Created: latency-svc-7lpf2
Jan 27 12:56:32.598: INFO: Got endpoints: latency-svc-7lpf2 [1.695122425s]
Jan 27 12:56:32.672: INFO: Created: latency-svc-257nh
Jan 27 12:56:32.681: INFO: Got endpoints: latency-svc-257nh [1.717335898s]
Jan 27 12:56:32.730: INFO: Created: latency-svc-sksz5
Jan 27 12:56:32.757: INFO: Got endpoints: latency-svc-sksz5 [1.573249339s]
Jan 27 12:56:32.769: INFO: Created: latency-svc-jrxns
Jan 27 12:56:32.903: INFO: Got endpoints: latency-svc-jrxns [1.443125794s]
Jan 27 12:56:32.941: INFO: Created: latency-svc-dwrqs
Jan 27 12:56:32.958: INFO: Got endpoints: latency-svc-dwrqs [1.486084799s]
Jan 27 12:56:33.103: INFO: Created: latency-svc-mfd8d
Jan 27 12:56:33.108: INFO: Got endpoints: latency-svc-mfd8d [1.581998411s]
Jan 27 12:56:33.297: INFO: Created: latency-svc-b98k9
Jan 27 12:56:33.318: INFO: Got endpoints: latency-svc-b98k9 [1.657096274s]
Jan 27 12:56:33.346: INFO: Created: latency-svc-bxlqg
Jan 27 12:56:33.352: INFO: Got endpoints: latency-svc-bxlqg [1.645793171s]
Jan 27 12:56:33.473: INFO: Created: latency-svc-rt4gn
Jan 27 12:56:33.473: INFO: Got endpoints: latency-svc-rt4gn [1.584550247s]
Jan 27 12:56:33.509: INFO: Created: latency-svc-ft6m8
Jan 27 12:56:33.517: INFO: Got endpoints: latency-svc-ft6m8 [1.583542361s]
Jan 27 12:56:33.548: INFO: Created: latency-svc-blxc7
Jan 27 12:56:33.687: INFO: Got endpoints: latency-svc-blxc7 [1.706231933s]
Jan 27 12:56:33.709: INFO: Created: latency-svc-47dr2
Jan 27 12:56:33.727: INFO: Got endpoints: latency-svc-47dr2 [1.542596027s]
Jan 27 12:56:33.759: INFO: Created: latency-svc-s4rq2
Jan 27 12:56:33.762: INFO: Got endpoints: latency-svc-s4rq2 [1.426820363s]
Jan 27 12:56:33.939: INFO: Created: latency-svc-2qf45
Jan 27 12:56:33.944: INFO: Got endpoints: latency-svc-2qf45 [1.555924297s]
Jan 27 12:56:34.187: INFO: Created: latency-svc-jcl9q
Jan 27 12:56:34.194: INFO: Got endpoints: latency-svc-jcl9q [1.631613298s]
Jan 27 12:56:34.387: INFO: Created: latency-svc-7khk8
Jan 27 12:56:34.397: INFO: Got endpoints: latency-svc-7khk8 [1.79813148s]
Jan 27 12:56:34.467: INFO: Created: latency-svc-pkv7j
Jan 27 12:56:34.645: INFO: Got endpoints: latency-svc-pkv7j [1.963262215s]
Jan 27 12:56:34.651: INFO: Created: latency-svc-hbdqr
Jan 27 12:56:34.656: INFO: Got endpoints: latency-svc-hbdqr [1.898783772s]
Jan 27 12:56:34.827: INFO: Created: latency-svc-r7774
Jan 27 12:56:34.858: INFO: Got endpoints: latency-svc-r7774 [1.954055403s]
Jan 27 12:56:34.910: INFO: Created: latency-svc-rxwj4
Jan 27 12:56:35.033: INFO: Got endpoints: latency-svc-rxwj4 [2.074237991s]
Jan 27 12:56:35.066: INFO: Created: latency-svc-n8gxq
Jan 27 12:56:35.286: INFO: Got endpoints: latency-svc-n8gxq [2.178122592s]
Jan 27 12:56:35.309: INFO: Created: latency-svc-c2fdj
Jan 27 12:56:35.309: INFO: Got endpoints: latency-svc-c2fdj [1.990735819s]
Jan 27 12:56:35.371: INFO: Created: latency-svc-ggfr5
Jan 27 12:56:35.381: INFO: Got endpoints: latency-svc-ggfr5 [2.027766278s]
Jan 27 12:56:35.597: INFO: Created: latency-svc-422l9
Jan 27 12:56:35.607: INFO: Got endpoints: latency-svc-422l9 [2.133301798s]
Jan 27 12:56:35.775: INFO: Created: latency-svc-cd4f4
Jan 27 12:56:35.785: INFO: Got endpoints: latency-svc-cd4f4 [2.26770471s]
Jan 27 12:56:35.834: INFO: Created: latency-svc-5ggmf
Jan 27 12:56:35.853: INFO: Got endpoints: latency-svc-5ggmf [2.165951675s]
Jan 27 12:56:36.002: INFO: Created: latency-svc-5kdv6
Jan 27 12:56:36.030: INFO: Got endpoints: latency-svc-5kdv6 [2.302838648s]
Jan 27 12:56:36.075: INFO: Created: latency-svc-jr2fg
Jan 27 12:56:36.235: INFO: Got endpoints: latency-svc-jr2fg [2.47217289s]
Jan 27 12:56:36.255: INFO: Created: latency-svc-4tj6r
Jan 27 12:56:36.278: INFO: Got endpoints: latency-svc-4tj6r [2.333278861s]
Jan 27 12:56:36.278: INFO: Created: latency-svc-bm9kv
Jan 27 12:56:36.299: INFO: Got endpoints: latency-svc-bm9kv [2.104030857s]
Jan 27 12:56:36.444: INFO: Created: latency-svc-p97pp
Jan 27 12:56:36.486: INFO: Got endpoints: latency-svc-p97pp [2.089031786s]
Jan 27 12:56:36.491: INFO: Created: latency-svc-vj8dz
Jan 27 12:56:36.529: INFO: Got endpoints: latency-svc-vj8dz [1.884377531s]
Jan 27 12:56:36.542: INFO: Created: latency-svc-7wtmn
Jan 27 12:56:36.623: INFO: Got endpoints: latency-svc-7wtmn [1.966313556s]
Jan 27 12:56:36.645: INFO: Created: latency-svc-qtncv
Jan 27 12:56:36.692: INFO: Got endpoints: latency-svc-qtncv [1.834089756s]
Jan 27 12:56:36.714: INFO: Created: latency-svc-gt59j
Jan 27 12:56:36.829: INFO: Got endpoints: latency-svc-gt59j [1.795694248s]
Jan 27 12:56:36.849: INFO: Created: latency-svc-4tpbc
Jan 27 12:56:36.904: INFO: Created: latency-svc-fmclp
Jan 27 12:56:36.905: INFO: Got endpoints: latency-svc-4tpbc [1.618646234s]
Jan 27 12:56:37.032: INFO: Got endpoints: latency-svc-fmclp [1.722910733s]
Jan 27 12:56:37.062: INFO: Created: latency-svc-sdwxg
Jan 27 12:56:37.079: INFO: Got endpoints: latency-svc-sdwxg [1.698149408s]
Jan 27 12:56:37.224: INFO: Created: latency-svc-xnjgl
Jan 27 12:56:37.257: INFO: Got endpoints: latency-svc-xnjgl [1.649716391s]
Jan 27 12:56:37.257: INFO: Created: latency-svc-l2bvs
Jan 27 12:56:37.262: INFO: Got endpoints: latency-svc-l2bvs [1.476349816s]
Jan 27 12:56:37.304: INFO: Created: latency-svc-5mjx8
Jan 27 12:56:37.385: INFO: Got endpoints: latency-svc-5mjx8 [1.53255003s]
Jan 27 12:56:37.404: INFO: Created: latency-svc-mbcn8
Jan 27 12:56:37.409: INFO: Got endpoints: latency-svc-mbcn8 [1.378681498s]
Jan 27 12:56:37.444: INFO: Created: latency-svc-gzb22
Jan 27 12:56:37.451: INFO: Got endpoints: latency-svc-gzb22 [1.215424177s]
Jan 27 12:56:37.478: INFO: Created: latency-svc-lgg2j
Jan 27 12:56:37.479: INFO: Got endpoints: latency-svc-lgg2j [1.200687819s]
Jan 27 12:56:37.572: INFO: Created: latency-svc-jfwcc
Jan 27 12:56:37.585: INFO: Got endpoints: latency-svc-jfwcc [1.286178345s]
Jan 27 12:56:37.619: INFO: Created: latency-svc-qxzxx
Jan 27 12:56:37.641: INFO: Got endpoints: latency-svc-qxzxx [1.154291496s]
Jan 27 12:56:37.644: INFO: Created: latency-svc-x65xv
Jan 27 12:56:37.647: INFO: Got endpoints: latency-svc-x65xv [1.117783098s]
Jan 27 12:56:37.757: INFO: Created: latency-svc-d6rzh
Jan 27 12:56:37.784: INFO: Got endpoints: latency-svc-d6rzh [1.161572727s]
Jan 27 12:56:37.815: INFO: Created: latency-svc-9297s
Jan 27 12:56:37.819: INFO: Got endpoints: latency-svc-9297s [1.125979245s]
Jan 27 12:56:37.897: INFO: Created: latency-svc-68bnc
Jan 27 12:56:37.901: INFO: Got endpoints: latency-svc-68bnc [1.071917339s]
Jan 27 12:56:37.948: INFO: Created: latency-svc-k5kv5
Jan 27 12:56:37.954: INFO: Got endpoints: latency-svc-k5kv5 [1.048935753s]
Jan 27 12:56:37.976: INFO: Created: latency-svc-zljdp
Jan 27 12:56:38.058: INFO: Got endpoints: latency-svc-zljdp [1.025858921s]
Jan 27 12:56:38.062: INFO: Created: latency-svc-p7rjr
Jan 27 12:56:38.103: INFO: Got endpoints: latency-svc-p7rjr [1.023357234s]
Jan 27 12:56:38.147: INFO: Created: latency-svc-gf8vb
Jan 27 12:56:38.225: INFO: Got endpoints: latency-svc-gf8vb [967.921677ms]
Jan 27 12:56:38.234: INFO: Created: latency-svc-r4dl4
Jan 27 12:56:38.246: INFO: Got endpoints: latency-svc-r4dl4 [984.28534ms]
Jan 27 12:56:38.275: INFO: Created: latency-svc-gcp4d
Jan 27 12:56:38.281: INFO: Got endpoints: latency-svc-gcp4d [895.26866ms]
Jan 27 12:56:38.308: INFO: Created: latency-svc-cpsv4
Jan 27 12:56:38.316: INFO: Got endpoints: latency-svc-cpsv4 [907.187006ms]
Jan 27 12:56:38.316: INFO: Latencies: [155.409112ms 218.424411ms 234.972211ms 405.753683ms 590.058853ms 608.355276ms 781.924978ms 829.63823ms 895.26866ms 907.187006ms 949.505697ms 967.921677ms 984.28534ms 995.352572ms 1.023357234s 1.025858921s 1.048935753s 1.071917339s 1.117783098s 1.125979245s 1.138896138s 1.154291496s 1.161572727s 1.17005043s 1.18625247s 1.197604428s 1.200687819s 1.215424177s 1.232021125s 1.244972204s 1.24591629s 1.251149048s 1.258201482s 1.263091552s 1.26616447s 1.286178345s 1.286851235s 1.292962513s 1.294557687s 1.297503526s 1.305687236s 1.318767503s 1.326844906s 1.332057871s 1.333177866s 1.339014367s 1.343394062s 1.35569108s 1.374185919s 1.378378725s 1.378681498s 1.381511627s 1.385088121s 1.392254184s 1.393466004s 1.398316641s 1.406290433s 1.41193463s 1.424520315s 1.426820363s 1.430012347s 1.440094191s 1.440144373s 1.443125794s 1.446334533s 1.446885457s 1.453648255s 1.456307001s 1.45801636s 1.460844059s 1.463290736s 1.467746129s 1.468232245s 1.472822505s 1.473403163s 1.474647771s 1.474916701s 1.476349816s 1.484034288s 1.485673476s 1.485792909s 1.486084799s 1.486825305s 1.495480914s 1.50714913s 1.527303221s 1.53255003s 1.535614792s 1.541750428s 1.542596027s 1.551801465s 1.55521675s 1.555924297s 1.557726834s 1.561332876s 1.570210516s 1.573249339s 1.577301599s 1.57762069s 1.578474833s 1.581998411s 1.583542361s 1.584550247s 1.604289009s 1.607739415s 1.618646234s 1.631613298s 1.640231069s 1.645793171s 1.648430909s 1.649693896s 1.649716391s 1.657096274s 1.657434478s 1.659524907s 1.668554343s 1.68324734s 1.68720224s 1.691531358s 1.695122425s 1.695512127s 1.698149408s 1.705042754s 1.705155616s 1.706231933s 1.715991408s 1.717335898s 1.722910733s 1.728092274s 1.72814568s 1.729282745s 1.732426497s 1.734579467s 1.742867498s 1.745563411s 1.753588862s 1.764630801s 1.768781744s 1.792559092s 1.794196047s 1.795694248s 1.79813148s 1.80250119s 1.834089756s 1.852974803s 1.884377531s 1.898765893s 1.898783772s 1.899937332s 1.914848563s 1.921325459s 1.938680943s 1.954055403s 1.95598824s 1.963262215s 1.966313556s 1.971850318s 1.97661793s 1.981831049s 1.990735819s 1.992542173s 2.023704587s 2.027766278s 2.057819088s 2.058555619s 2.071711198s 2.074237991s 2.081079619s 2.087986904s 2.089031786s 2.096135364s 2.102197729s 2.102456244s 2.104030857s 2.104684452s 2.117379861s 2.120864856s 2.128528379s 2.131746231s 2.133301798s 2.146875785s 2.148528191s 2.165951675s 2.175507066s 2.178122592s 2.18833177s 2.226995479s 2.26770471s 2.283137589s 2.287945372s 2.288474392s 2.302838648s 2.326103797s 2.333278861s 2.367328169s 2.430740007s 2.47217289s 2.509254134s 2.519863666s 2.634378873s]
Jan 27 12:56:38.316: INFO: 50 %ile: 1.581998411s
Jan 27 12:56:38.316: INFO: 90 %ile: 2.146875785s
Jan 27 12:56:38.316: INFO: 99 %ile: 2.519863666s
Jan 27 12:56:38.316: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 12:56:38.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8306" for this suite.
Jan 27 12:57:16.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:57:16.582: INFO: namespace svc-latency-8306 deletion completed in 38.139361379s
• [SLOW TEST:68.467 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 12:57:16.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-41c48a89-3851-4269-9d72-c8e9e68d331b
STEP: Creating a pod to test consume configMaps
Jan 27 12:57:16.707: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6" in namespace "projected-645" to be "success or failure"
Jan 27 12:57:16.728: INFO: Pod "pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.875007ms
Jan 27 12:57:18.741: INFO: Pod "pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033608456s
Jan 27 12:57:20.749: INFO: Pod "pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042454101s
Jan 27 12:57:22.762: INFO: Pod "pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055263752s
Jan 27 12:57:24.772: INFO: Pod "pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065493264s
STEP: Saw pod success
Jan 27 12:57:24.773: INFO: Pod "pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6" satisfied condition "success or failure"
Jan 27 12:57:24.777: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6 container projected-configmap-volume-test:
STEP: delete the pod
Jan 27 12:57:24.896: INFO: Waiting for pod pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6 to disappear
Jan 27 12:57:24.902: INFO: Pod pod-projected-configmaps-2e251711-b1bd-4cd0-be4b-d4518e75c1f6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 12:57:24.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-645" for this suite.
Jan 27 12:57:30.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:57:31.087: INFO: namespace projected-645 deletion completed in 6.15567456s
• [SLOW TEST:14.504 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes
client
Jan 27 12:57:31.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-30cbec46-2f36-4e44-ba4f-15cf91402858
STEP: Creating a pod to test consume configMaps
Jan 27 12:57:31.210: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8" in namespace "projected-7662" to be "success or failure"
Jan 27 12:57:31.232: INFO: Pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.470604ms
Jan 27 12:57:33.242: INFO: Pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032733902s
Jan 27 12:57:35.252: INFO: Pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042315395s
Jan 27 12:57:37.266: INFO: Pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056676979s
Jan 27 12:57:39.277: INFO: Pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067300966s
Jan 27 12:57:41.285: INFO: Pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075186327s
STEP: Saw pod success
Jan 27 12:57:41.285: INFO: Pod "pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8" satisfied condition "success or failure"
Jan 27 12:57:41.290: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8 container projected-configmap-volume-test:
STEP: delete the pod
Jan 27 12:57:41.353: INFO: Waiting for pod pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8 to disappear
Jan 27 12:57:41.435: INFO: Pod pod-projected-configmaps-292ce340-e4d4-4d03-9f9b-8dec529355d8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 12:57:41.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7662" for this suite.
Jan 27 12:57:47.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:57:47.629: INFO: namespace projected-7662 deletion completed in 6.183655314s
• [SLOW TEST:16.540 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 12:57:47.630:
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5308
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5308
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5308
Jan 27 12:57:48.025: INFO: Found 0 stateful pods, waiting for 1
Jan 27 12:57:58.032: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 27 12:57:58.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 27 12:58:01.555: INFO: stderr: "I0127 12:58:00.954269 32 log.go:172] (0xc0008d0210) (0xc000788820) Create stream\nI0127 12:58:00.954543 32 log.go:172] (0xc0008d0210) (0xc000788820) Stream added, broadcasting: 1\nI0127 12:58:00.961698 32 log.go:172] (0xc0008d0210) Reply frame received for 1\nI0127 12:58:00.961750 32 log.go:172] (0xc0008d0210) (0xc0006f00a0) Create stream\nI0127 12:58:00.961760 32 log.go:172] (0xc0008d0210) (0xc0006f00a0) Stream added, broadcasting: 3\nI0127 12:58:00.962913 32 log.go:172] (0xc0008d0210) Reply frame received for 3\nI0127 12:58:00.962934 32 log.go:172] (0xc0008d0210) (0xc00037a000) Create stream\nI0127 12:58:00.962941 32 
log.go:172] (0xc0008d0210) (0xc00037a000) Stream added, broadcasting: 5\nI0127 12:58:00.964360 32 log.go:172] (0xc0008d0210) Reply frame received for 5\nI0127 12:58:01.240201 32 log.go:172] (0xc0008d0210) Data frame received for 5\nI0127 12:58:01.240306 32 log.go:172] (0xc00037a000) (5) Data frame handling\nI0127 12:58:01.240339 32 log.go:172] (0xc00037a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 12:58:01.361200 32 log.go:172] (0xc0008d0210) Data frame received for 3\nI0127 12:58:01.361341 32 log.go:172] (0xc0006f00a0) (3) Data frame handling\nI0127 12:58:01.361370 32 log.go:172] (0xc0006f00a0) (3) Data frame sent\nI0127 12:58:01.547553 32 log.go:172] (0xc0008d0210) Data frame received for 1\nI0127 12:58:01.547816 32 log.go:172] (0xc0008d0210) (0xc00037a000) Stream removed, broadcasting: 5\nI0127 12:58:01.547876 32 log.go:172] (0xc000788820) (1) Data frame handling\nI0127 12:58:01.547901 32 log.go:172] (0xc000788820) (1) Data frame sent\nI0127 12:58:01.547984 32 log.go:172] (0xc0008d0210) (0xc0006f00a0) Stream removed, broadcasting: 3\nI0127 12:58:01.548032 32 log.go:172] (0xc0008d0210) (0xc000788820) Stream removed, broadcasting: 1\nI0127 12:58:01.548054 32 log.go:172] (0xc0008d0210) Go away received\nI0127 12:58:01.548711 32 log.go:172] (0xc0008d0210) (0xc000788820) Stream removed, broadcasting: 1\nI0127 12:58:01.548724 32 log.go:172] (0xc0008d0210) (0xc0006f00a0) Stream removed, broadcasting: 3\nI0127 12:58:01.548729 32 log.go:172] (0xc0008d0210) (0xc00037a000) Stream removed, broadcasting: 5\n" Jan 27 12:58:01.555: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 12:58:01.555: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 12:58:01.561: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 27 12:58:11.573: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=false
Jan 27 12:58:11.574: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 12:58:11.607: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 12:58:11.607: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }]
Jan 27 12:58:11.607: INFO: 
Jan 27 12:58:11.607: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 27 12:58:12.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982843438s
Jan 27 12:58:13.893: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970413088s
Jan 27 12:58:15.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.697754585s
Jan 27 12:58:16.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.268576271s
Jan 27 12:58:17.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.257337911s
Jan 27 12:58:18.372: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.249880047s
Jan 27 12:58:20.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.218594244s
Jan 27 12:58:21.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 482.269894ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5308
Jan 27 12:58:22.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:58:23.180: INFO: stderr: "I0127 12:58:22.828679 58 log.go:172] (0xc000708a50) (0xc000582820) Create 
stream\nI0127 12:58:22.828874 58 log.go:172] (0xc000708a50) (0xc000582820) Stream added, broadcasting: 1\nI0127 12:58:22.834913 58 log.go:172] (0xc000708a50) Reply frame received for 1\nI0127 12:58:22.834947 58 log.go:172] (0xc000708a50) (0xc000844000) Create stream\nI0127 12:58:22.834954 58 log.go:172] (0xc000708a50) (0xc000844000) Stream added, broadcasting: 3\nI0127 12:58:22.836336 58 log.go:172] (0xc000708a50) Reply frame received for 3\nI0127 12:58:22.836358 58 log.go:172] (0xc000708a50) (0xc0005828c0) Create stream\nI0127 12:58:22.836372 58 log.go:172] (0xc000708a50) (0xc0005828c0) Stream added, broadcasting: 5\nI0127 12:58:22.837874 58 log.go:172] (0xc000708a50) Reply frame received for 5\nI0127 12:58:23.003359 58 log.go:172] (0xc000708a50) Data frame received for 5\nI0127 12:58:23.003561 58 log.go:172] (0xc0005828c0) (5) Data frame handling\nI0127 12:58:23.003628 58 log.go:172] (0xc0005828c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0127 12:58:23.007223 58 log.go:172] (0xc000708a50) Data frame received for 3\nI0127 12:58:23.007276 58 log.go:172] (0xc000844000) (3) Data frame handling\nI0127 12:58:23.007334 58 log.go:172] (0xc000844000) (3) Data frame sent\nI0127 12:58:23.170349 58 log.go:172] (0xc000708a50) (0xc000844000) Stream removed, broadcasting: 3\nI0127 12:58:23.170669 58 log.go:172] (0xc000708a50) Data frame received for 1\nI0127 12:58:23.170697 58 log.go:172] (0xc000582820) (1) Data frame handling\nI0127 12:58:23.170714 58 log.go:172] (0xc000582820) (1) Data frame sent\nI0127 12:58:23.170732 58 log.go:172] (0xc000708a50) (0xc000582820) Stream removed, broadcasting: 1\nI0127 12:58:23.170797 58 log.go:172] (0xc000708a50) (0xc0005828c0) Stream removed, broadcasting: 5\nI0127 12:58:23.170935 58 log.go:172] (0xc000708a50) Go away received\nI0127 12:58:23.171928 58 log.go:172] (0xc000708a50) (0xc000582820) Stream removed, broadcasting: 1\nI0127 12:58:23.171939 58 log.go:172] (0xc000708a50) (0xc000844000) Stream removed, 
broadcasting: 3\nI0127 12:58:23.171949 58 log.go:172] (0xc000708a50) (0xc0005828c0) Stream removed, broadcasting: 5\n" Jan 27 12:58:23.181: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 12:58:23.181: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 12:58:23.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:58:23.335: INFO: rc: 1 Jan 27 12:58:23.335: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0028488a0 exit status 1 true [0xc00265c240 0xc00265c268 0xc00265c2a8] [0xc00265c240 0xc00265c268 0xc00265c2a8] [0xc00265c260 0xc00265c290] [0xba6c50 0xba6c50] 0xc002840fc0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 27 12:58:33.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:58:33.751: INFO: stderr: "I0127 12:58:33.487101 94 log.go:172] (0xc000116c60) (0xc0008dc5a0) Create stream\nI0127 12:58:33.487228 94 log.go:172] (0xc000116c60) (0xc0008dc5a0) Stream added, broadcasting: 1\nI0127 12:58:33.496505 94 log.go:172] (0xc000116c60) Reply frame received for 1\nI0127 12:58:33.496587 94 log.go:172] (0xc000116c60) (0xc0006be3c0) Create stream\nI0127 12:58:33.496603 94 log.go:172] (0xc000116c60) (0xc0006be3c0) Stream added, broadcasting: 3\nI0127 12:58:33.498056 94 log.go:172] (0xc000116c60) Reply frame received for 3\nI0127 
12:58:33.498082 94 log.go:172] (0xc000116c60) (0xc0006be460) Create stream\nI0127 12:58:33.498090 94 log.go:172] (0xc000116c60) (0xc0006be460) Stream added, broadcasting: 5\nI0127 12:58:33.500656 94 log.go:172] (0xc000116c60) Reply frame received for 5\nI0127 12:58:33.609567 94 log.go:172] (0xc000116c60) Data frame received for 5\nI0127 12:58:33.609665 94 log.go:172] (0xc0006be460) (5) Data frame handling\nI0127 12:58:33.609686 94 log.go:172] (0xc0006be460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0127 12:58:33.628040 94 log.go:172] (0xc000116c60) Data frame received for 3\nI0127 12:58:33.628158 94 log.go:172] (0xc0006be3c0) (3) Data frame handling\nI0127 12:58:33.628185 94 log.go:172] (0xc0006be3c0) (3) Data frame sent\nI0127 12:58:33.628250 94 log.go:172] (0xc000116c60) Data frame received for 5\nI0127 12:58:33.628273 94 log.go:172] (0xc0006be460) (5) Data frame handling\nI0127 12:58:33.628297 94 log.go:172] (0xc0006be460) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0127 12:58:33.744554 94 log.go:172] (0xc000116c60) Data frame received for 1\nI0127 12:58:33.744651 94 log.go:172] (0xc000116c60) (0xc0006be3c0) Stream removed, broadcasting: 3\nI0127 12:58:33.744817 94 log.go:172] (0xc0008dc5a0) (1) Data frame handling\nI0127 12:58:33.744849 94 log.go:172] (0xc0008dc5a0) (1) Data frame sent\nI0127 12:58:33.744876 94 log.go:172] (0xc000116c60) (0xc0006be460) Stream removed, broadcasting: 5\nI0127 12:58:33.745014 94 log.go:172] (0xc000116c60) (0xc0008dc5a0) Stream removed, broadcasting: 1\nI0127 12:58:33.745058 94 log.go:172] (0xc000116c60) Go away received\nI0127 12:58:33.745972 94 log.go:172] (0xc000116c60) (0xc0008dc5a0) Stream removed, broadcasting: 1\nI0127 12:58:33.745989 94 log.go:172] (0xc000116c60) (0xc0006be3c0) Stream removed, broadcasting: 3\nI0127 12:58:33.746005 94 log.go:172] (0xc000116c60) (0xc0006be460) Stream removed, broadcasting: 5\n" Jan 27 12:58:33.751: INFO: stdout: 
"'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 12:58:33.751: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 12:58:33.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:58:34.439: INFO: stderr: "I0127 12:58:34.013076 114 log.go:172] (0xc0008902c0) (0xc0007746e0) Create stream\nI0127 12:58:34.013528 114 log.go:172] (0xc0008902c0) (0xc0007746e0) Stream added, broadcasting: 1\nI0127 12:58:34.025408 114 log.go:172] (0xc0008902c0) Reply frame received for 1\nI0127 12:58:34.025494 114 log.go:172] (0xc0008902c0) (0xc0005a0320) Create stream\nI0127 12:58:34.025510 114 log.go:172] (0xc0008902c0) (0xc0005a0320) Stream added, broadcasting: 3\nI0127 12:58:34.028409 114 log.go:172] (0xc0008902c0) Reply frame received for 3\nI0127 12:58:34.028446 114 log.go:172] (0xc0008902c0) (0xc0003c0000) Create stream\nI0127 12:58:34.028454 114 log.go:172] (0xc0008902c0) (0xc0003c0000) Stream added, broadcasting: 5\nI0127 12:58:34.030591 114 log.go:172] (0xc0008902c0) Reply frame received for 5\nI0127 12:58:34.298689 114 log.go:172] (0xc0008902c0) Data frame received for 3\nI0127 12:58:34.298832 114 log.go:172] (0xc0005a0320) (3) Data frame handling\nI0127 12:58:34.298856 114 log.go:172] (0xc0005a0320) (3) Data frame sent\nI0127 12:58:34.298903 114 log.go:172] (0xc0008902c0) Data frame received for 5\nI0127 12:58:34.298918 114 log.go:172] (0xc0003c0000) (5) Data frame handling\nI0127 12:58:34.298926 114 log.go:172] (0xc0003c0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0127 12:58:34.427509 114 log.go:172] (0xc0008902c0) Data frame received for 1\nI0127 12:58:34.427940 114 log.go:172] (0xc0008902c0) (0xc0003c0000) Stream removed, 
broadcasting: 5\nI0127 12:58:34.428083 114 log.go:172] (0xc0007746e0) (1) Data frame handling\nI0127 12:58:34.428161 114 log.go:172] (0xc0007746e0) (1) Data frame sent\nI0127 12:58:34.428387 114 log.go:172] (0xc0008902c0) (0xc0005a0320) Stream removed, broadcasting: 3\nI0127 12:58:34.428482 114 log.go:172] (0xc0008902c0) (0xc0007746e0) Stream removed, broadcasting: 1\nI0127 12:58:34.428520 114 log.go:172] (0xc0008902c0) Go away received\nI0127 12:58:34.429905 114 log.go:172] (0xc0008902c0) (0xc0007746e0) Stream removed, broadcasting: 1\nI0127 12:58:34.430051 114 log.go:172] (0xc0008902c0) (0xc0005a0320) Stream removed, broadcasting: 3\nI0127 12:58:34.430061 114 log.go:172] (0xc0008902c0) (0xc0003c0000) Stream removed, broadcasting: 5\n" Jan 27 12:58:34.439: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 12:58:34.439: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 12:58:34.449: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 12:58:34.450: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 12:58:34.450: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 27 12:58:34.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 12:58:35.054: INFO: stderr: "I0127 12:58:34.744295 131 log.go:172] (0xc00083e0b0) (0xc00091c640) Create stream\nI0127 12:58:34.744655 131 log.go:172] (0xc00083e0b0) (0xc00091c640) Stream added, broadcasting: 1\nI0127 12:58:34.753537 131 log.go:172] (0xc00083e0b0) Reply frame received for 1\nI0127 12:58:34.753634 131 log.go:172] (0xc00083e0b0) (0xc000906000) Create stream\nI0127 12:58:34.753649 131 
log.go:172] (0xc00083e0b0) (0xc000906000) Stream added, broadcasting: 3\nI0127 12:58:34.754835 131 log.go:172] (0xc00083e0b0) Reply frame received for 3\nI0127 12:58:34.754871 131 log.go:172] (0xc00083e0b0) (0xc00091c6e0) Create stream\nI0127 12:58:34.754877 131 log.go:172] (0xc00083e0b0) (0xc00091c6e0) Stream added, broadcasting: 5\nI0127 12:58:34.756232 131 log.go:172] (0xc00083e0b0) Reply frame received for 5\nI0127 12:58:34.915021 131 log.go:172] (0xc00083e0b0) Data frame received for 3\nI0127 12:58:34.915139 131 log.go:172] (0xc000906000) (3) Data frame handling\nI0127 12:58:34.915212 131 log.go:172] (0xc000906000) (3) Data frame sent\nI0127 12:58:34.916729 131 log.go:172] (0xc00083e0b0) Data frame received for 5\nI0127 12:58:34.916869 131 log.go:172] (0xc00091c6e0) (5) Data frame handling\nI0127 12:58:34.916961 131 log.go:172] (0xc00091c6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 12:58:35.047479 131 log.go:172] (0xc00083e0b0) Data frame received for 1\nI0127 12:58:35.047608 131 log.go:172] (0xc00083e0b0) (0xc000906000) Stream removed, broadcasting: 3\nI0127 12:58:35.047811 131 log.go:172] (0xc00091c640) (1) Data frame handling\nI0127 12:58:35.047844 131 log.go:172] (0xc00091c640) (1) Data frame sent\nI0127 12:58:35.047949 131 log.go:172] (0xc00083e0b0) (0xc00091c6e0) Stream removed, broadcasting: 5\nI0127 12:58:35.047979 131 log.go:172] (0xc00083e0b0) (0xc00091c640) Stream removed, broadcasting: 1\nI0127 12:58:35.048592 131 log.go:172] (0xc00083e0b0) (0xc00091c640) Stream removed, broadcasting: 1\nI0127 12:58:35.048603 131 log.go:172] (0xc00083e0b0) (0xc000906000) Stream removed, broadcasting: 3\nI0127 12:58:35.048615 131 log.go:172] (0xc00083e0b0) Go away received\nI0127 12:58:35.048646 131 log.go:172] (0xc00083e0b0) (0xc00091c6e0) Stream removed, broadcasting: 5\n" Jan 27 12:58:35.054: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 12:58:35.054: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 12:58:35.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 12:58:35.413: INFO: stderr: "I0127 12:58:35.209914 148 log.go:172] (0xc000116fd0) (0xc000546a00) Create stream\nI0127 12:58:35.210079 148 log.go:172] (0xc000116fd0) (0xc000546a00) Stream added, broadcasting: 1\nI0127 12:58:35.214248 148 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0127 12:58:35.214296 148 log.go:172] (0xc000116fd0) (0xc000546aa0) Create stream\nI0127 12:58:35.214304 148 log.go:172] (0xc000116fd0) (0xc000546aa0) Stream added, broadcasting: 3\nI0127 12:58:35.215520 148 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0127 12:58:35.216178 148 log.go:172] (0xc000116fd0) (0xc000766000) Create stream\nI0127 12:58:35.216293 148 log.go:172] (0xc000116fd0) (0xc000766000) Stream added, broadcasting: 5\nI0127 12:58:35.222199 148 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0127 12:58:35.281020 148 log.go:172] (0xc000116fd0) Data frame received for 5\nI0127 12:58:35.281079 148 log.go:172] (0xc000766000) (5) Data frame handling\nI0127 12:58:35.281107 148 log.go:172] (0xc000766000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 12:58:35.335189 148 log.go:172] (0xc000116fd0) Data frame received for 3\nI0127 12:58:35.335252 148 log.go:172] (0xc000546aa0) (3) Data frame handling\nI0127 12:58:35.335322 148 log.go:172] (0xc000546aa0) (3) Data frame sent\nI0127 12:58:35.406363 148 log.go:172] (0xc000116fd0) (0xc000766000) Stream removed, broadcasting: 5\nI0127 12:58:35.406571 148 log.go:172] (0xc000116fd0) Data frame received for 1\nI0127 12:58:35.406624 148 log.go:172] (0xc000116fd0) (0xc000546aa0) Stream removed, broadcasting: 3\nI0127 12:58:35.406727 148 log.go:172] (0xc000546a00) (1) Data frame 
handling\nI0127 12:58:35.406761 148 log.go:172] (0xc000546a00) (1) Data frame sent\nI0127 12:58:35.406776 148 log.go:172] (0xc000116fd0) (0xc000546a00) Stream removed, broadcasting: 1\nI0127 12:58:35.406794 148 log.go:172] (0xc000116fd0) Go away received\nI0127 12:58:35.407471 148 log.go:172] (0xc000116fd0) (0xc000546a00) Stream removed, broadcasting: 1\nI0127 12:58:35.407483 148 log.go:172] (0xc000116fd0) (0xc000546aa0) Stream removed, broadcasting: 3\nI0127 12:58:35.407489 148 log.go:172] (0xc000116fd0) (0xc000766000) Stream removed, broadcasting: 5\n" Jan 27 12:58:35.413: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 12:58:35.413: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 12:58:35.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 12:58:35.888: INFO: stderr: "I0127 12:58:35.584751 171 log.go:172] (0xc000a404d0) (0xc00033c820) Create stream\nI0127 12:58:35.584896 171 log.go:172] (0xc000a404d0) (0xc00033c820) Stream added, broadcasting: 1\nI0127 12:58:35.599053 171 log.go:172] (0xc000a404d0) Reply frame received for 1\nI0127 12:58:35.599118 171 log.go:172] (0xc000a404d0) (0xc00033c000) Create stream\nI0127 12:58:35.599133 171 log.go:172] (0xc000a404d0) (0xc00033c000) Stream added, broadcasting: 3\nI0127 12:58:35.602928 171 log.go:172] (0xc000a404d0) Reply frame received for 3\nI0127 12:58:35.603025 171 log.go:172] (0xc000a404d0) (0xc000604280) Create stream\nI0127 12:58:35.603049 171 log.go:172] (0xc000a404d0) (0xc000604280) Stream added, broadcasting: 5\nI0127 12:58:35.604798 171 log.go:172] (0xc000a404d0) Reply frame received for 5\nI0127 12:58:35.705476 171 log.go:172] (0xc000a404d0) Data frame received for 5\nI0127 12:58:35.705593 171 log.go:172] (0xc000604280) (5) Data frame 
handling\nI0127 12:58:35.705636 171 log.go:172] (0xc000604280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 12:58:35.755875 171 log.go:172] (0xc000a404d0) Data frame received for 3\nI0127 12:58:35.755956 171 log.go:172] (0xc00033c000) (3) Data frame handling\nI0127 12:58:35.755993 171 log.go:172] (0xc00033c000) (3) Data frame sent\nI0127 12:58:35.877617 171 log.go:172] (0xc000a404d0) Data frame received for 1\nI0127 12:58:35.877747 171 log.go:172] (0xc000a404d0) (0xc00033c000) Stream removed, broadcasting: 3\nI0127 12:58:35.877833 171 log.go:172] (0xc00033c820) (1) Data frame handling\nI0127 12:58:35.877853 171 log.go:172] (0xc00033c820) (1) Data frame sent\nI0127 12:58:35.877915 171 log.go:172] (0xc000a404d0) (0xc000604280) Stream removed, broadcasting: 5\nI0127 12:58:35.877945 171 log.go:172] (0xc000a404d0) (0xc00033c820) Stream removed, broadcasting: 1\nI0127 12:58:35.877996 171 log.go:172] (0xc000a404d0) Go away received\nI0127 12:58:35.878863 171 log.go:172] (0xc000a404d0) (0xc00033c820) Stream removed, broadcasting: 1\nI0127 12:58:35.878883 171 log.go:172] (0xc000a404d0) (0xc00033c000) Stream removed, broadcasting: 3\nI0127 12:58:35.878896 171 log.go:172] (0xc000a404d0) (0xc000604280) Stream removed, broadcasting: 5\n" Jan 27 12:58:35.888: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 12:58:35.888: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 12:58:35.888: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 12:58:35.894: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 27 12:58:45.932: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 12:58:45.932: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 27 12:58:45.932: INFO: Waiting for pod ss-2 to enter Running - 
Ready=false, currently Running - Ready=false
Jan 27 12:58:45.958: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 12:58:45.958: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }]
Jan 27 12:58:45.958: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }]
Jan 27 12:58:45.958: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }]
Jan 27 12:58:45.958: INFO: 
Jan 27 12:58:45.958: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 27 12:58:47.194: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 12:58:47.194: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }]
Jan 27 12:58:47.194: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }]
Jan 27 12:58:47.194: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }]
Jan 27 12:58:47.194: INFO: 
Jan 27 12:58:47.194: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 27 12:58:48.200: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 12:58:48.200: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }]
Jan 27 12:58:48.200: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }]
Jan 27 12:58:48.200: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }]
Jan 27 12:58:48.200: INFO: 
Jan 27 12:58:48.200: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 27 12:58:49.210: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 12:58:49.210: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }]
Jan 27 12:58:49.210: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled
True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:49.210: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:49.210: INFO: Jan 27 12:58:49.210: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 12:58:50.226: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 12:58:50.226: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }] Jan 27 12:58:50.226: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:50.226: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:50.226: INFO: Jan 27 12:58:50.226: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 12:58:51.251: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 12:58:51.251: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }] Jan 27 12:58:51.252: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:51.252: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:51.252: INFO: Jan 27 12:58:51.252: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 12:58:52.265: 
INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 12:58:52.265: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }] Jan 27 12:58:52.265: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:52.265: INFO: Jan 27 12:58:52.265: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 27 12:58:53.282: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 12:58:53.282: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }] Jan 27 12:58:53.282: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:53.283: INFO: Jan 27 12:58:53.283: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 27 12:58:54.292: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 12:58:54.292: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }] Jan 27 12:58:54.292: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:54.292: INFO: Jan 27 12:58:54.292: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 27 12:58:55.306: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 12:58:55.306: INFO: ss-0 iruya-node Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:57:48 +0000 UTC }] Jan 27 12:58:55.306: INFO: 
ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:11 +0000 UTC }] Jan 27 12:58:55.306: INFO: Jan 27 12:58:55.306: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5308 Jan 27 12:58:56.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:58:56.547: INFO: rc: 1 Jan 27 12:58:56.547: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ef2090 exit status 1 true [0xc001222188 0xc0012221a0 0xc0012221b8] [0xc001222188 0xc0012221a0 0xc0012221b8] [0xc001222198 0xc0012221b0] [0xba6c50 0xba6c50] 0xc002d5b200 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 27 12:59:06.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:59:06.685: INFO: rc: 1 Jan 27 12:59:06.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00318b170 exit status 1 true [0xc0011ea1e8 0xc0011ea200 0xc0011ea218] [0xc0011ea1e8 0xc0011ea200 0xc0011ea218] [0xc0011ea1f8 0xc0011ea210] [0xba6c50 0xba6c50] 0xc00274bc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 12:59:16.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:59:16.825: INFO: rc: 1 Jan 27 12:59:16.825: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0026577d0 exit status 1 true [0xc000741d28 0xc000741d80 0xc000741dc0] [0xc000741d28 0xc000741d80 0xc000741dc0] [0xc000741d40 0xc000741db0] [0xba6c50 0xba6c50] 0xc002dd91a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 12:59:26.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:59:26.968: INFO: rc: 1 Jan 27 12:59:26.969: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00318b260 exit status 1 true [0xc0011ea220 0xc0011ea238 0xc0011ea250] [0xc0011ea220 0xc0011ea238 0xc0011ea250] [0xc0011ea230 0xc0011ea248] [0xba6c50 0xba6c50] 0xc00274bf80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: 
exit status 1 Jan 27 12:59:36.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:59:37.182: INFO: rc: 1 Jan 27 12:59:37.182: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00318b350 exit status 1 true [0xc0011ea258 0xc0011ea270 0xc0011ea288] [0xc0011ea258 0xc0011ea270 0xc0011ea288] [0xc0011ea268 0xc0011ea280] [0xba6c50 0xba6c50] 0xc0024c2300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 12:59:47.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:59:47.341: INFO: rc: 1 Jan 27 12:59:47.341: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ef2180 exit status 1 true [0xc0012221c0 0xc0012221d8 0xc0012221f0] [0xc0012221c0 0xc0012221d8 0xc0012221f0] [0xc0012221d0 0xc0012221e8] [0xba6c50 0xba6c50] 0xc002d5b5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 12:59:57.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 12:59:57.470: INFO: rc: 1 Jan 27 12:59:57.470: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00318b410 exit status 1 true [0xc0011ea290 0xc0011ea2a8 0xc0011ea2c0] [0xc0011ea290 0xc0011ea2a8 0xc0011ea2c0] [0xc0011ea2a0 0xc0011ea2b8] [0xba6c50 0xba6c50] 0xc0024c2660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:00:07.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:00:07.683: INFO: rc: 1 Jan 27 13:00:07.684: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002657950 exit status 1 true [0xc000741de8 0xc000741e60 0xc000741ea0] [0xc000741de8 0xc000741e60 0xc000741ea0] [0xc000741e40 0xc000741e90] [0xba6c50 0xba6c50] 0xc002dd98c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:00:17.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:00:17.826: INFO: rc: 1 Jan 27 13:00:17.826: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002e60090 exit status 1 true [0xc000010010 0xc000740d78 0xc0007410c0] [0xc000010010 0xc000740d78 0xc0007410c0] 
[0xc000740cf8 0xc000741040] [0xba6c50 0xba6c50] 0xc00274a240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:00:27.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:00:27.948: INFO: rc: 1 Jan 27 13:00:27.948: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f92090 exit status 1 true [0xc0011ea000 0xc0011ea018 0xc0011ea030] [0xc0011ea000 0xc0011ea018 0xc0011ea030] [0xc0011ea010 0xc0011ea028] [0xba6c50 0xba6c50] 0xc002e907e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:00:37.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:00:38.202: INFO: rc: 1 Jan 27 13:00:38.203: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0028480f0 exit status 1 true [0xc00265c010 0xc00265c050 0xc00265c078] [0xc00265c010 0xc00265c050 0xc00265c078] [0xc00265c048 0xc00265c070] [0xba6c50 0xba6c50] 0xc002dd8360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:00:48.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Jan 27 13:00:48.336: INFO: rc: 1 Jan 27 13:00:48.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0010900c0 exit status 1 true [0xc001222000 0xc001222018 0xc001222030] [0xc001222000 0xc001222018 0xc001222030] [0xc001222010 0xc001222028] [0xba6c50 0xba6c50] 0xc0024c2300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:00:58.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:00:58.458: INFO: rc: 1 Jan 27 13:00:58.459: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001090180 exit status 1 true [0xc001222038 0xc001222050 0xc001222068] [0xc001222038 0xc001222050 0xc001222068] [0xc001222048 0xc001222060] [0xba6c50 0xba6c50] 0xc0024c2660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:01:08.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:01:08.595: INFO: rc: 1 Jan 27 13:01:08.596: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-2" not found [] 0xc002e60150 exit status 1 true [0xc0007411a0 0xc000741348 0xc0007415b8] [0xc0007411a0 0xc000741348 0xc0007415b8] [0xc000741240 0xc000741490] [0xba6c50 0xba6c50] 0xc00274a540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:01:18.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:01:18.731: INFO: rc: 1 Jan 27 13:01:18.732: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0028481e0 exit status 1 true [0xc00265c098 0xc00265c0c0 0xc00265c100] [0xc00265c098 0xc00265c0c0 0xc00265c100] [0xc00265c0b8 0xc00265c0e8] [0xba6c50 0xba6c50] 0xc002dd8ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:01:28.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:01:28.954: INFO: rc: 1 Jan 27 13:01:28.954: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f921b0 exit status 1 true [0xc0011ea038 0xc0011ea050 0xc0011ea068] [0xc0011ea038 0xc0011ea050 0xc0011ea068] [0xc0011ea048 0xc0011ea060] [0xba6c50 0xba6c50] 0xc002e91320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:01:38.954: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:01:39.098: INFO: rc: 1 Jan 27 13:01:39.098: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f922a0 exit status 1 true [0xc0011ea070 0xc0011ea088 0xc0011ea0a0] [0xc0011ea070 0xc0011ea088 0xc0011ea0a0] [0xc0011ea080 0xc0011ea098] [0xba6c50 0xba6c50] 0xc002e91f80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:01:49.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:01:49.253: INFO: rc: 1 Jan 27 13:01:49.254: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0028482d0 exit status 1 true [0xc00265c108 0xc00265c138 0xc00265c160] [0xc00265c108 0xc00265c138 0xc00265c160] [0xc00265c118 0xc00265c158] [0xba6c50 0xba6c50] 0xc002dd8ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:01:59.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:01:59.380: INFO: rc: 1 Jan 27 13:01:59.380: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002848390 exit status 1 true [0xc00265c168 0xc00265c1a8 0xc00265c1f8] [0xc00265c168 0xc00265c1a8 0xc00265c1f8] [0xc00265c1a0 0xc00265c1e0] [0xba6c50 0xba6c50] 0xc002dd9380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:02:09.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:02:09.558: INFO: rc: 1 Jan 27 13:02:09.559: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002848450 exit status 1 true [0xc00265c200 0xc00265c218 0xc00265c258] [0xc00265c200 0xc00265c218 0xc00265c258] [0xc00265c210 0xc00265c240] [0xba6c50 0xba6c50] 0xc002dd9aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:02:19.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:02:19.741: INFO: rc: 1 Jan 27 13:02:19.741: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001090090 exit status 1 true [0xc00051c038 0xc001222010 0xc001222028] [0xc00051c038 0xc001222010 0xc001222028] [0xc001222008 0xc001222020] [0xba6c50 0xba6c50] 0xc002e907e0 }: Command 
stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:02:29.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:02:29.953: INFO: rc: 1 Jan 27 13:02:29.953: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0010901e0 exit status 1 true [0xc001222030 0xc001222048 0xc001222060] [0xc001222030 0xc001222048 0xc001222060] [0xc001222040 0xc001222058] [0xba6c50 0xba6c50] 0xc002e91320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:02:39.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:02:40.116: INFO: rc: 1 Jan 27 13:02:40.116: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f920c0 exit status 1 true [0xc0011ea000 0xc0011ea018 0xc0011ea030] [0xc0011ea000 0xc0011ea018 0xc0011ea030] [0xc0011ea010 0xc0011ea028] [0xba6c50 0xba6c50] 0xc0024c2300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:02:50.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:02:50.287: INFO: rc: 1 Jan 27 13:02:50.287: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f921e0 exit status 1 true [0xc0011ea038 0xc0011ea050 0xc0011ea068] [0xc0011ea038 0xc0011ea050 0xc0011ea068] [0xc0011ea048 0xc0011ea060] [0xba6c50 0xba6c50] 0xc0024c2660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:03:00.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:03:00.432: INFO: rc: 1 Jan 27 13:03:00.433: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002e600f0 exit status 1 true [0xc000740af0 0xc000740f78 0xc0007411a0] [0xc000740af0 0xc000740f78 0xc0007411a0] [0xc000740d78 0xc0007410c0] [0xba6c50 0xba6c50] 0xc0028403c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:03:10.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:03:10.611: INFO: rc: 1 Jan 27 13:03:10.612: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0028480c0 exit status 1 true [0xc00265c010 0xc00265c050 
0xc00265c078] [0xc00265c010 0xc00265c050 0xc00265c078] [0xc00265c048 0xc00265c070] [0xba6c50 0xba6c50] 0xc00274a240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:03:20.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:03:20.797: INFO: rc: 1 Jan 27 13:03:20.797: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002848210 exit status 1 true [0xc00265c098 0xc00265c0c0 0xc00265c100] [0xc00265c098 0xc00265c0c0 0xc00265c100] [0xc00265c0b8 0xc00265c0e8] [0xba6c50 0xba6c50] 0xc00274a540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:03:30.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:03:30.935: INFO: rc: 1 Jan 27 13:03:30.935: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001090300 exit status 1 true [0xc001222068 0xc001222080 0xc001222098] [0xc001222068 0xc001222080 0xc001222098] [0xc001222078 0xc001222090] [0xba6c50 0xba6c50] 0xc002e91f80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:03:40.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:03:41.141: INFO: rc: 1 Jan 27 13:03:41.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002e60270 exit status 1 true [0xc000741220 0xc0007413e0 0xc000741670] [0xc000741220 0xc0007413e0 0xc000741670] [0xc000741348 0xc0007415b8] [0xba6c50 0xba6c50] 0xc0028407e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:03:51.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:03:51.269: INFO: rc: 1 Jan 27 13:03:51.270: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002848300 exit status 1 true [0xc00265c108 0xc00265c138 0xc00265c160] [0xc00265c108 0xc00265c138 0xc00265c160] [0xc00265c118 0xc00265c158] [0xba6c50 0xba6c50] 0xc00274a8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 27 13:04:01.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5308 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:04:01.492: INFO: rc: 1 Jan 27 13:04:01.493: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Jan 27 13:04:01.493: INFO: Scaling statefulset ss to 0 Jan 27 13:04:01.501: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] 
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 27 13:04:01.503: INFO: Deleting all statefulset in ns statefulset-5308 Jan 27 13:04:01.505: INFO: Scaling statefulset ss to 0 Jan 27 13:04:01.515: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 13:04:01.517: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:04:01.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5308" for this suite. Jan 27 13:04:07.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:04:07.754: INFO: namespace statefulset-5308 deletion completed in 6.220541375s • [SLOW TEST:380.124 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:04:07.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: 
Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-adf9625e-e0f4-4b63-9958-12ed84ebda9a
STEP: Creating a pod to test consume configMaps
Jan 27 13:04:07.890: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0" in namespace "configmap-8366" to be "success or failure"
Jan 27 13:04:07.898: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.961453ms
Jan 27 13:04:09.904: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013995482s
Jan 27 13:04:11.912: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022171209s
Jan 27 13:04:13.927: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037135653s
Jan 27 13:04:15.940: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049614661s
Jan 27 13:04:17.955: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064817311s
Jan 27 13:04:19.971: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.080971532s
STEP: Saw pod success
Jan 27 13:04:19.971: INFO: Pod "pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0" satisfied condition "success or failure"
Jan 27 13:04:19.975: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0 container configmap-volume-test:
STEP: delete the pod
Jan 27 13:04:20.065: INFO: Waiting for pod pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0 to disappear
Jan 27 13:04:20.121: INFO: Pod pod-configmaps-e6076182-42f7-4478-a9a7-d1004e297ed0 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:04:20.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8366" for this suite.
Jan 27 13:04:26.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:04:26.289: INFO: namespace configmap-8366 deletion completed in 6.155066392s
• [SLOW TEST:18.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:04:26.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api
object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9341 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9341 STEP: Creating statefulset with conflicting port in namespace statefulset-9341 STEP: Waiting until pod test-pod will start running in namespace statefulset-9341 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9341 Jan 27 13:04:40.563: INFO: Observed stateful pod in namespace: statefulset-9341, name: ss-0, uid: 9b4794f7-b323-47d8-a503-fd0356ba16b2, status phase: Pending. Waiting for statefulset controller to delete. Jan 27 13:04:40.661: INFO: Observed stateful pod in namespace: statefulset-9341, name: ss-0, uid: 9b4794f7-b323-47d8-a503-fd0356ba16b2, status phase: Failed. Waiting for statefulset controller to delete. Jan 27 13:04:40.682: INFO: Observed stateful pod in namespace: statefulset-9341, name: ss-0, uid: 9b4794f7-b323-47d8-a503-fd0356ba16b2, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 27 13:04:40.693: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9341 STEP: Removing pod with conflicting port in namespace statefulset-9341 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9341 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 27 13:04:56.889: INFO: Deleting all statefulset in ns statefulset-9341 Jan 27 13:04:56.895: INFO: Scaling statefulset ss to 0 Jan 27 13:05:06.958: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 13:05:06.966: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:05:06.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9341" for this suite. Jan 27 13:05:13.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:05:13.156: INFO: namespace statefulset-9341 deletion completed in 6.151712638s • [SLOW TEST:46.866 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:05:13.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:05:23.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7450" for this suite.
Jan 27 13:06:09.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:06:09.647: INFO: namespace kubelet-test-7450 deletion completed in 46.249640224s
• [SLOW TEST:56.491 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:06:09.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-zttv
STEP: Creating a pod to test atomic-volume-subpath
Jan 27 13:06:09.800: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zttv" in namespace "subpath-4971" to be "success or failure"
Jan 27 13:06:09.814: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.41091ms
Jan 27 13:06:11.825: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024717556s
Jan 27 13:06:13.835: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03409048s
Jan 27 13:06:15.841: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040646439s
Jan 27 13:06:17.875: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074237308s
Jan 27 13:06:19.881: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080808166s
Jan 27 13:06:21.891: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 12.090260966s
Jan 27 13:06:23.903: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 14.102085344s
Jan 27 13:06:25.911: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true.
Elapsed: 16.110652404s Jan 27 13:06:27.918: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 18.117158887s Jan 27 13:06:29.932: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 20.131686356s Jan 27 13:06:31.944: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 22.143116357s Jan 27 13:06:34.000: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 24.199071863s Jan 27 13:06:36.008: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 26.20705406s Jan 27 13:06:38.016: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 28.215741812s Jan 27 13:06:40.026: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Running", Reason="", readiness=true. Elapsed: 30.225744494s Jan 27 13:06:42.114: INFO: Pod "pod-subpath-test-downwardapi-zttv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 32.312966999s STEP: Saw pod success Jan 27 13:06:42.114: INFO: Pod "pod-subpath-test-downwardapi-zttv" satisfied condition "success or failure" Jan 27 13:06:42.120: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-zttv container test-container-subpath-downwardapi-zttv: STEP: delete the pod Jan 27 13:06:42.190: INFO: Waiting for pod pod-subpath-test-downwardapi-zttv to disappear Jan 27 13:06:42.316: INFO: Pod pod-subpath-test-downwardapi-zttv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zttv Jan 27 13:06:42.316: INFO: Deleting pod "pod-subpath-test-downwardapi-zttv" in namespace "subpath-4971" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:06:42.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4971" for this suite. Jan 27 13:06:48.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:06:48.461: INFO: namespace subpath-4971 deletion completed in 6.134131662s • [SLOW TEST:38.813 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:06:48.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 27 13:06:48.519: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:07:06.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3000" for this suite. 
Jan 27 13:07:28.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:07:28.539: INFO: namespace init-container-3000 deletion completed in 22.209949656s • [SLOW TEST:40.077 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:07:28.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-9d5f1e91-97ce-4271-9a22-6f1cfcd94af8 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:07:40.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5922" for this suite. 
Jan 27 13:08:02.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:08:03.018: INFO: namespace configmap-5922 deletion completed in 22.204169842s • [SLOW TEST:34.479 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:08:03.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-94d6 STEP: Creating a pod to test atomic-volume-subpath Jan 27 13:08:03.207: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-94d6" in namespace "subpath-7060" to be "success or failure" Jan 27 13:08:03.219: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.760641ms Jan 27 13:08:05.271: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064353869s Jan 27 13:08:07.280: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07266335s Jan 27 13:08:09.301: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09422669s Jan 27 13:08:11.309: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102055389s Jan 27 13:08:13.320: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112854414s Jan 27 13:08:15.381: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 12.174181417s Jan 27 13:08:17.388: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 14.180801889s Jan 27 13:08:19.425: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 16.217540997s Jan 27 13:08:21.441: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 18.23447517s Jan 27 13:08:23.458: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 20.250757943s Jan 27 13:08:25.474: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 22.266902901s Jan 27 13:08:27.495: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 24.287929578s Jan 27 13:08:29.621: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 26.414188134s Jan 27 13:08:31.637: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. Elapsed: 28.430035515s Jan 27 13:08:33.651: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Running", Reason="", readiness=true. 
Elapsed: 30.4439953s Jan 27 13:08:35.661: INFO: Pod "pod-subpath-test-configmap-94d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.454508738s STEP: Saw pod success Jan 27 13:08:35.662: INFO: Pod "pod-subpath-test-configmap-94d6" satisfied condition "success or failure" Jan 27 13:08:35.668: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-94d6 container test-container-subpath-configmap-94d6: STEP: delete the pod Jan 27 13:08:35.735: INFO: Waiting for pod pod-subpath-test-configmap-94d6 to disappear Jan 27 13:08:35.749: INFO: Pod pod-subpath-test-configmap-94d6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-94d6 Jan 27 13:08:35.749: INFO: Deleting pod "pod-subpath-test-configmap-94d6" in namespace "subpath-7060" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:08:35.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7060" for this suite. 
Jan 27 13:08:41.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:08:42.050: INFO: namespace subpath-7060 deletion completed in 6.207592964s
• [SLOW TEST:39.031 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:08:42.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:08:54.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5485" for this suite.
Jan 27 13:09:00.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:09:00.580: INFO: namespace kubelet-test-5485 deletion completed in 6.17641571s
• [SLOW TEST:18.529 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:09:00.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:09:00.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83" in namespace "downward-api-9117" to be "success or failure"
Jan 27 13:09:00.871: INFO: Pod
"downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 64.10158ms Jan 27 13:09:02.921: INFO: Pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114069916s Jan 27 13:09:04.999: INFO: Pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192237637s Jan 27 13:09:07.007: INFO: Pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200114372s Jan 27 13:09:09.023: INFO: Pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2164892s Jan 27 13:09:11.053: INFO: Pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 10.246594555s Jan 27 13:09:13.076: INFO: Pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.269613056s STEP: Saw pod success Jan 27 13:09:13.076: INFO: Pod "downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83" satisfied condition "success or failure" Jan 27 13:09:13.082: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83 container client-container: STEP: delete the pod Jan 27 13:09:13.145: INFO: Waiting for pod downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83 to disappear Jan 27 13:09:13.158: INFO: Pod downwardapi-volume-a4533cb6-98be-45a3-aa30-8665afa7bd83 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:09:13.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9117" for this suite. 
Jan 27 13:09:19.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:09:19.389: INFO: namespace downward-api-9117 deletion completed in 6.225426352s • [SLOW TEST:18.809 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:09:19.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 27 13:12:23.912: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 13:12:23.963: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 13:12:25.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 13:12:25.971: INFO: Pod pod-with-poststart-exec-hook still exists
[... the identical "Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists" pair repeats on a 2s poll interval from 13:12:27.963 through 13:13:53.991, elided here ...]
Jan 27 13:13:55.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 13:13:55.979: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:13:55.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6616" for this suite.
Jan 27 13:14:18.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:14:18.143: INFO: namespace container-lifecycle-hook-6616 deletion completed in 22.122081883s
• [SLOW TEST:298.753 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:14:18.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 27 13:14:18.588: INFO: Waiting up to 5m0s for pod "pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5" in namespace "emptydir-5527" to be "success or failure"
Jan 27 13:14:18.650: INFO: Pod "pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 62.664605ms
Jan 27 13:14:20.661: INFO: Pod "pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073639253s
Jan 27 13:14:22.758: INFO: Pod "pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170722681s
Jan 27 13:14:24.777: INFO: Pod "pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189029832s
Jan 27 13:14:26.783: INFO: Pod "pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.19543685s
STEP: Saw pod success
Jan 27 13:14:26.783: INFO: Pod "pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5" satisfied condition "success or failure"
Jan 27 13:14:26.787: INFO: Trying to get logs from node iruya-node pod pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5 container test-container:
STEP: delete the pod
Jan 27 13:14:26.828: INFO: Waiting for pod pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5 to disappear
Jan 27 13:14:26.840: INFO: Pod pod-cec5ae83-2562-4560-ad6d-9fdf05b92fb5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:14:26.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5527" for this suite.
Jan 27 13:14:32.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:14:32.991: INFO: namespace emptydir-5527 deletion completed in 6.146320968s
• [SLOW TEST:14.847 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:14:32.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 13:14:33.285: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"05c9950a-6dab-4acb-84d5-2cdff8a1fc99", Controller:(*bool)(0xc00312139a), BlockOwnerDeletion:(*bool)(0xc00312139b)}}
Jan 27 13:14:33.398: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7dfa2dca-0a95-4754-bbee-43bbcec426e0", Controller:(*bool)(0xc0031216fa), BlockOwnerDeletion:(*bool)(0xc0031216fb)}}
Jan 27 13:14:33.433: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"94ae17ea-aa52-4f5d-a36e-99fbe91421ea", Controller:(*bool)(0xc00312189a), BlockOwnerDeletion:(*bool)(0xc00312189b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:14:38.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8840" for this suite.
Jan 27 13:14:44.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:14:44.674: INFO: namespace gc-8840 deletion completed in 6.198238715s
• [SLOW TEST:11.681 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:14:44.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 27 13:14:44.778: INFO: Waiting up to 5m0s for pod "pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b" in namespace "emptydir-5672" to be "success or failure"
Jan 27 13:14:44.793: INFO: Pod "pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.961118ms
Jan 27 13:14:46.805: INFO: Pod "pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026940265s
Jan 27 13:14:48.813: INFO: Pod "pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034978712s
Jan 27 13:14:50.830: INFO: Pod "pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051695586s
Jan 27 13:14:52.846: INFO: Pod "pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067354294s
STEP: Saw pod success
Jan 27 13:14:52.846: INFO: Pod "pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b" satisfied condition "success or failure"
Jan 27 13:14:52.864: INFO: Trying to get logs from node iruya-node pod pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b container test-container:
STEP: delete the pod
Jan 27 13:14:53.017: INFO: Waiting for pod pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b to disappear
Jan 27 13:14:53.028: INFO: Pod pod-2a80b396-385d-4f0d-bb6a-b021cd5f1d4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:14:53.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5672" for this suite.
Jan 27 13:14:59.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:14:59.263: INFO: namespace emptydir-5672 deletion completed in 6.219542558s
• [SLOW TEST:14.589 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:14:59.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0127 13:15:02.404773 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 13:15:02.404: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:15:02.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2213" for this suite.
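The garbage collector test above deletes a Deployment without orphaning, and the "wait for all rs to be garbage collected" step succeeds because cascading deletion removes every transitive dependent via its OwnerReferences. A self-contained sketch of that cascade over a toy ownership graph (the `collect` helper and the object names are illustrative; this is not the real kube-controller-manager code):

```go
package main

import "fmt"

// collect returns every object that is a transitive dependent of the
// deleted owner, in the breadth-first order a cascading (non-orphaning)
// delete would visit them. Sketch of the ownership-graph walk only.
func collect(dependents map[string][]string, deleted string) []string {
	var gone []string
	queue := []string{deleted}
	for len(queue) > 0 {
		owner := queue[0]
		queue = queue[1:]
		for _, d := range dependents[owner] {
			gone = append(gone, d)
			queue = append(queue, d)
		}
	}
	return gone
}

func main() {
	// The Deployment owns the ReplicaSet, which owns the pods —
	// mirroring "delete RS created by deployment when not orphaning".
	dependents := map[string][]string{
		"deployment": {"rs"},
		"rs":         {"pod-a", "pod-b"},
	}
	fmt.Println(collect(dependents, "deployment")) // the RS, then its pods
}
```

The earlier "should not be blocked by dependency circle" test exercises the same walk when the ownership edges form a cycle (pod1 → pod3 → pod2 → pod1): since no live object outside the cycle owns any member, all three are collectible.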
Jan 27 13:15:08.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:15:08.729: INFO: namespace gc-2213 deletion completed in 6.275163529s
• [SLOW TEST:9.465 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:15:08.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 13:15:08.947: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 27 13:15:09.026: INFO: Number of nodes with available pods: 0
Jan 27 13:15:09.026: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 27 13:15:09.106: INFO: Number of nodes with available pods: 0
Jan 27 13:15:09.106: INFO: Node iruya-node is running more than one daemon pod
[... the identical "Number of nodes with available pods: 0" / "Node iruya-node is running more than one daemon pod" pair repeats on a 1s poll interval from 13:15:10.116 through 13:15:17.116, elided here ...]
Jan 27 13:15:18.116: INFO: Number of nodes with available pods: 1
Jan 27 13:15:18.116: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 27 13:15:18.316: INFO: Number of nodes with available pods: 1
Jan 27 13:15:18.316: INFO: Number of running nodes: 0, number of available pods: 1
Jan 27 13:15:19.326: INFO: Number of nodes with available pods: 0
Jan 27 13:15:19.326: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 27 13:15:19.356: INFO: Number of nodes with available pods: 0
Jan 27 13:15:19.356: INFO: Node iruya-node is running more than one daemon pod
[... the same poll pair repeats every 1s from 13:15:20.371 through 13:15:34.368, elided here ...]
Jan 27 13:15:35.368: INFO: Number of nodes with available pods: 1
Jan 27 13:15:35.368: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6166, will wait for the garbage collector to delete the pods
Jan 27 13:15:35.450: INFO: Deleting DaemonSet.extensions daemon-set took: 14.431334ms
Jan 27 13:15:35.751: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.351106ms
Jan 27 13:15:46.656: INFO: Number of nodes with available pods: 0
Jan 27 13:15:46.656: INFO: Number of running nodes: 0, number of available pods: 0
Jan 27 13:15:46.660: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6166/daemonsets","resourceVersion":"22061630"},"items":null}
Jan 27 13:15:46.662: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6166/pods","resourceVersion":"22061630"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:15:46.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6166" for this suite.
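The blue/green relabeling in the DaemonSet test above works because a DaemonSet with a node selector places a pod only on nodes whose labels are a superset of that selector; relabeling the node from blue to green unschedules the pod, and updating the selector to green schedules it again. A self-contained sketch of that matching rule (the `matches` helper is illustrative, not the actual scheduler code):

```go
package main

import "fmt"

// matches reports whether every key/value pair in the selector is
// present in the node's labels — the rule behind the blue/green
// relabeling observed in the DaemonSet test log (sketch only).
func matches(selector, nodeLabels map[string]string) bool {
	for k, v := range selector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"color": "blue"}
	// Node labeled blue: the daemon pod is scheduled.
	fmt.Println(matches(selector, map[string]string{"color": "blue", "kubernetes.io/hostname": "iruya-node"}))
	// Node relabeled green: the daemon pod is unscheduled.
	fmt.Println(matches(selector, map[string]string{"color": "green", "kubernetes.io/hostname": "iruya-node"}))
}
```

An empty selector matches every node, which is why a DaemonSet without a node selector runs on all schedulable nodes.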
Jan 27 13:15:52.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:15:52.834: INFO: namespace daemonsets-6166 deletion completed in 6.128542146s
• [SLOW TEST:44.104 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:15:52.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-6a44027b-2cdc-4919-8874-8d1d5167ae6d
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-6a44027b-2cdc-4919-8874-8d1d5167ae6d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:16:05.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1098" for this suite.
Jan 27 13:16:27.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:16:27.302: INFO: namespace projected-1098 deletion completed in 22.1395909s • [SLOW TEST:34.468 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:16:27.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-ee561d5c-5053-45b3-b876-9d3b30744b71 STEP: Creating a pod to test consume secrets Jan 27 13:16:27.437: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd" in namespace "projected-966" to be "success or failure" Jan 27 13:16:27.443: INFO: Pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.296534ms Jan 27 13:16:29.452: INFO: Pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015434433s Jan 27 13:16:31.468: INFO: Pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03086477s Jan 27 13:16:33.509: INFO: Pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072472261s Jan 27 13:16:35.517: INFO: Pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080649811s Jan 27 13:16:37.534: INFO: Pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097705824s STEP: Saw pod success Jan 27 13:16:37.535: INFO: Pod "pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd" satisfied condition "success or failure" Jan 27 13:16:37.540: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd container secret-volume-test: STEP: delete the pod Jan 27 13:16:37.590: INFO: Waiting for pod pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd to disappear Jan 27 13:16:37.600: INFO: Pod pod-projected-secrets-11c43084-6789-4833-9e64-5a959606b3dd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:16:37.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-966" for this suite. 
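The wait lines above report elapsed times as Go-style duration strings (`6.296534ms`, `2.015434433s`, `5m0s`). A small helper for turning those log values into seconds when post-processing a run like this one (the helper itself is ours, not part of the e2e framework):

```python
import re

# Unit multipliers for Go duration suffixes as they appear in these logs.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}

# 'ms'/'us'/'ns' must be tried before bare 'm'/'s' in the alternation.
_PART = re.compile(r"(\d+(?:\.\d+)?)(h|ms|us|ns|m|s)")

def parse_go_duration(s):
    """Parse a Go-style duration like '2.015434433s' or '5m0s' into
    seconds as a float."""
    total = 0.0
    for value, unit in _PART.findall(s):
        total += float(value) * _UNITS[unit]
    return total
```

For example, the `30m0s` node-schedulability timeout at the top of the run parses to 1800 seconds.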
Jan 27 13:16:43.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:16:43.850: INFO: namespace projected-966 deletion completed in 6.241692572s • [SLOW TEST:16.548 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:16:43.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0127 13:16:54.575132 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 27 13:16:54.575: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:16:54.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3062" for this suite. 
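The garbage collector test above checks that deleting a replication controller *without* orphaning also removes the pods it owns. A toy model of that cascading-delete rule, using a uid-to-ownerReferences map (this is a sketch of the behavior under test, not the real controller):

```python
def cascade_delete(objects, owner_uid, orphan=False):
    """Return the set of uids removed when `owner_uid` is deleted.
    `objects` maps each uid to the set of its owner uids. With
    orphan=False, dependents are collected transitively, mirroring the
    'should delete pods created by rc when not orphaning' behavior."""
    deleted = {owner_uid}
    if not orphan:
        # Keep sweeping until no object with a deleted owner remains.
        changed = True
        while changed:
            changed = False
            for uid, owners in objects.items():
                if uid not in deleted and owners & deleted:
                    deleted.add(uid)
                    changed = True
    return deleted
```

With `orphan=True` only the owner itself is removed and the dependents survive with dangling owner references, which is the contrasting "orphaning" case covered by a separate conformance test.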
Jan 27 13:17:00.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:17:00.863: INFO: namespace gc-3062 deletion completed in 6.275673187s • [SLOW TEST:17.011 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:17:00.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 27 13:17:00.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2681' Jan 27 13:17:03.032: INFO: stderr: "" Jan 27 13:17:03.032: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 27 13:17:03.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2681' Jan 27 13:17:03.462: INFO: stderr: "" Jan 27 13:17:03.462: INFO: stdout: "service/redis-master 
created\n" STEP: Waiting for Redis master to start. Jan 27 13:17:04.479: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:04.479: INFO: Found 0 / 1 Jan 27 13:17:05.474: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:05.475: INFO: Found 0 / 1 Jan 27 13:17:06.485: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:06.485: INFO: Found 0 / 1 Jan 27 13:17:07.476: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:07.476: INFO: Found 0 / 1 Jan 27 13:17:08.475: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:08.475: INFO: Found 0 / 1 Jan 27 13:17:09.479: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:09.479: INFO: Found 0 / 1 Jan 27 13:17:10.479: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:10.479: INFO: Found 0 / 1 Jan 27 13:17:11.472: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:11.472: INFO: Found 1 / 1 Jan 27 13:17:11.472: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 27 13:17:11.478: INFO: Selector matched 1 pods for map[app:redis] Jan 27 13:17:11.478: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
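From here on the framework logs each `kubectl describe` stdout as a single quoted string with literal `\n` escapes. When reading a run like this, a one-line helper (ours, purely for log triage) restores the readable multi-line form:

```python
import codecs

def unescape_logged_stdout(logged):
    """Decode a framework-logged stdout string containing literal
    backslash-n escapes back into multi-line text."""
    return codecs.decode(logged, "unicode_escape")
```

Applied to the describe output below, this recovers the familiar `Name:` / `Namespace:` / `Conditions:` layout that `kubectl describe` actually printed.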
Jan 27 13:17:11.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-s7w8m --namespace=kubectl-2681' Jan 27 13:17:11.663: INFO: stderr: "" Jan 27 13:17:11.664: INFO: stdout: "Name: redis-master-s7w8m\nNamespace: kubectl-2681\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 27 Jan 2020 13:17:03 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://56d983008bfe7e4200d74d91d8f11e32f153810fb3881fada0898839c3e1fd24\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 27 Jan 2020 13:17:09 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-fz7f7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-fz7f7:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-fz7f7\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-2681/redis-master-s7w8m to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Jan 27 13:17:11.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-2681' Jan 27 13:17:11.822: INFO: stderr: "" Jan 27 13:17:11.822: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2681\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-s7w8m\n" Jan 27 13:17:11.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2681' Jan 27 13:17:11.955: INFO: stderr: "" Jan 27 13:17:11.955: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2681\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.130.185\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 27 13:17:11.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 27 13:17:12.118: INFO: stderr: "" Jan 27 13:17:12.118: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False 
Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 27 Jan 2020 13:16:59 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 27 Jan 2020 13:16:59 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 27 Jan 2020 13:16:59 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 27 Jan 2020 13:16:59 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 176d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 107d\n kubectl-2681 redis-master-s7w8m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 27 13:17:12.118: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2681' Jan 27 13:17:12.231: INFO: stderr: "" Jan 27 13:17:12.231: INFO: stdout: "Name: kubectl-2681\nLabels: e2e-framework=kubectl\n e2e-run=3ff3f3ac-6df1-4bf3-bdbb-9eab3737a556\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:17:12.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2681" for this suite. Jan 27 13:17:34.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:17:34.386: INFO: namespace kubectl-2681 deletion completed in 22.148046792s • [SLOW TEST:33.522 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:17:34.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 27 13:17:34.490: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:17:48.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9363" for this suite. Jan 27 13:17:55.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:17:55.145: INFO: namespace init-container-9363 deletion completed in 6.141283375s • [SLOW TEST:20.759 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:17:55.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 27 13:17:55.214: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:18:16.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-262" for this suite. Jan 27 13:18:22.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:18:22.729: INFO: namespace pods-262 deletion completed in 6.122331073s • [SLOW TEST:27.583 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:18:22.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-3860 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3860 STEP: Deleting pre-stop pod Jan 27 13:18:46.118: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:18:46.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3860" for this suite. 
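The PreStop test judges success from the JSON payload logged under "Saw:" above: the server pod must have recorded at least one `prestop` hit from the deleted tester. A small check over that payload (field names follow the JSON in this log; the helper itself is illustrative):

```python
import json

def prestop_observed(payload):
    """Return True if the tester payload shows the pre-stop hook was
    delivered, i.e. 'Received' contains a positive 'prestop' count."""
    data = json.loads(payload)
    return (data.get("Received") or {}).get("prestop", 0) >= 1
```

Note that `"StillContactingPeers": true` and the endpoint-count log lines do not affect the verdict; only the `prestop` counter does.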
Jan 27 13:19:28.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:19:28.336: INFO: namespace prestop-3860 deletion completed in 42.165179904s • [SLOW TEST:65.608 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:19:28.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4026 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 27 13:19:28.500: INFO: Found 0 stateful pods, waiting for 3 Jan 27 13:19:38.667: INFO: Found 2 stateful pods, waiting for 3 Jan 27 13:19:48.517: INFO: Waiting for pod ss2-0 to enter 
Running - Ready=true, currently Running - Ready=true Jan 27 13:19:48.517: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:19:48.517: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 13:19:58.517: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:19:58.517: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:19:58.518: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 27 13:19:58.563: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 27 13:20:08.744: INFO: Updating stateful set ss2 Jan 27 13:20:08.796: INFO: Waiting for Pod statefulset-4026/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jan 27 13:20:19.111: INFO: Found 2 stateful pods, waiting for 3 Jan 27 13:20:29.120: INFO: Found 2 stateful pods, waiting for 3 Jan 27 13:20:39.125: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:20:39.125: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:20:39.125: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 13:20:49.121: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:20:49.121: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:20:49.121: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: 
Performing a phased rolling update Jan 27 13:20:49.163: INFO: Updating stateful set ss2 Jan 27 13:20:49.268: INFO: Waiting for Pod statefulset-4026/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:20:59.689: INFO: Updating stateful set ss2 Jan 27 13:20:59.894: INFO: Waiting for StatefulSet statefulset-4026/ss2 to complete update Jan 27 13:20:59.894: INFO: Waiting for Pod statefulset-4026/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:21:09.912: INFO: Waiting for StatefulSet statefulset-4026/ss2 to complete update Jan 27 13:21:09.912: INFO: Waiting for Pod statefulset-4026/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:21:19.907: INFO: Waiting for StatefulSet statefulset-4026/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 27 13:21:29.917: INFO: Deleting all statefulset in ns statefulset-4026 Jan 27 13:21:29.922: INFO: Scaling statefulset ss2 to 0 Jan 27 13:22:10.301: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 13:22:10.307: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:22:10.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4026" for this suite. 
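The canary and phased rolling updates above are driven by the StatefulSet RollingUpdate `partition`: pods with ordinal greater than or equal to the partition get the update revision, lower ordinals keep the current one. A minimal sketch of that rule (simplified; it ignores pod readiness and update ordering):

```python
def revision_for_ordinal(ordinal, replicas, partition, current, update):
    """Return which controller revision a pod at `ordinal` should run
    under the RollingUpdate partition rule exercised by this test."""
    if not 0 <= ordinal < replicas:
        raise ValueError("ordinal out of range")
    return update if ordinal >= partition else current
```

This is why, in the log above, setting the partition above the replica count applies the update to no pods, a partition of 2 canaries only `ss2-2`, and lowering the partition phases the update down through `ss2-1` and `ss2-0`.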
Jan 27 13:22:18.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:22:18.774: INFO: namespace statefulset-4026 deletion completed in 8.409595312s • [SLOW TEST:170.437 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:22:18.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-f7b513aa-c3a0-47c0-97d4-448801b07970 in namespace container-probe-4385 Jan 27 13:22:29.002: INFO: Started pod busybox-f7b513aa-c3a0-47c0-97d4-448801b07970 in namespace container-probe-4385 STEP: checking the pod's current 
state and verifying that restartCount is present Jan 27 13:22:29.010: INFO: Initial restart count of pod busybox-f7b513aa-c3a0-47c0-97d4-448801b07970 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:26:30.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4385" for this suite. Jan 27 13:26:37.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:26:37.220: INFO: namespace container-probe-4385 deletion completed in 6.214235478s • [SLOW TEST:258.445 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:26:37.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-volume-9de6044f-7d36-491e-b084-f2376052b611 STEP: Creating a pod to test consume configMaps Jan 27 13:26:37.292: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479" in namespace "configmap-5814" to be "success or failure" Jan 27 13:26:37.299: INFO: Pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479": Phase="Pending", Reason="", readiness=false. Elapsed: 7.691901ms Jan 27 13:26:39.327: INFO: Pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035111918s Jan 27 13:26:41.425: INFO: Pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133133422s Jan 27 13:26:43.433: INFO: Pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141442967s Jan 27 13:26:45.442: INFO: Pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479": Phase="Running", Reason="", readiness=true. Elapsed: 8.150171104s Jan 27 13:26:47.450: INFO: Pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158112559s STEP: Saw pod success Jan 27 13:26:47.450: INFO: Pod "pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479" satisfied condition "success or failure" Jan 27 13:26:47.454: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479 container configmap-volume-test: STEP: delete the pod Jan 27 13:26:47.526: INFO: Waiting for pod pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479 to disappear Jan 27 13:26:47.536: INFO: Pod pod-configmaps-f7cf304e-d88d-4a92-9365-3db062f1a479 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:26:47.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5814" for this suite. 
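The repeated `Phase="Pending" ... Elapsed: ...` lines above trace a pod's progress toward the "success or failure" condition. When triaging a run, a small extractor (ours, tied to this log's line format) turns them into a timeline:

```python
import re

# Matches the framework's wait lines, e.g.
#   Pod "...": Phase="Pending", Reason="", readiness=false. Elapsed: 7.691901ms
_WAIT_LINE = re.compile(r'Phase="(?P<phase>\w+)".*Elapsed: (?P<elapsed>[\d.]+m?s)')

def pod_wait_timeline(log_text):
    """Return the sequence of (phase, elapsed) pairs observed while
    waiting for a pod, in log order."""
    return [(m.group("phase"), m.group("elapsed"))
            for m in _WAIT_LINE.finditer(log_text)]
```

For the configMap pod above this yields four `Pending` samples, one `Running`, then `Succeeded` at about 10 seconds, matching the "Saw pod success" line.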
Jan 27 13:26:53.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:26:53.744: INFO: namespace configmap-5814 deletion completed in 6.200387397s

• [SLOW TEST:16.523 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:26:53.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 27 13:27:12.228: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:27:12.239: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:27:14.239: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:27:14.251: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:27:16.239: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:27:16.249: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:27:18.239: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:27:18.248: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:27:20.239: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:27:20.250: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:27:20.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6018" for this suite.
Jan 27 13:27:42.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:27:42.351: INFO: namespace container-lifecycle-hook-6018 deletion completed in 22.095355815s

• [SLOW TEST:48.607 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:27:42.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 27 13:27:42.507: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6903,SelfLink:/api/v1/namespaces/watch-6903/configmaps/e2e-watch-test-watch-closed,UID:e84c9335-0b1d-4928-974e-c9b2af335b20,ResourceVersion:22063187,Generation:0,CreationTimestamp:2020-01-27 13:27:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 13:27:42.507: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6903,SelfLink:/api/v1/namespaces/watch-6903/configmaps/e2e-watch-test-watch-closed,UID:e84c9335-0b1d-4928-974e-c9b2af335b20,ResourceVersion:22063188,Generation:0,CreationTimestamp:2020-01-27 13:27:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 27 13:27:42.823: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6903,SelfLink:/api/v1/namespaces/watch-6903/configmaps/e2e-watch-test-watch-closed,UID:e84c9335-0b1d-4928-974e-c9b2af335b20,ResourceVersion:22063189,Generation:0,CreationTimestamp:2020-01-27 13:27:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 13:27:42.824: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6903,SelfLink:/api/v1/namespaces/watch-6903/configmaps/e2e-watch-test-watch-closed,UID:e84c9335-0b1d-4928-974e-c9b2af335b20,ResourceVersion:22063190,Generation:0,CreationTimestamp:2020-01-27 13:27:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:27:42.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6903" for this suite.
Jan 27 13:27:48.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:27:49.052: INFO: namespace watch-6903 deletion completed in 6.205089021s

• [SLOW TEST:6.701 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:27:49.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d6aebe0a-5930-4073-8a19-4b988d743f93
STEP: Creating a pod to test consume configMaps
Jan 27 13:27:49.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459" in namespace "configmap-5828" to be "success or failure"
Jan 27 13:27:49.332: INFO: Pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459": Phase="Pending", Reason="", readiness=false. Elapsed: 12.377699ms
Jan 27 13:27:51.345: INFO: Pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025555618s
Jan 27 13:27:53.360: INFO: Pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040245071s
Jan 27 13:27:55.372: INFO: Pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053003013s
Jan 27 13:27:57.383: INFO: Pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063479255s
Jan 27 13:27:59.388: INFO: Pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068706333s
STEP: Saw pod success
Jan 27 13:27:59.388: INFO: Pod "pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459" satisfied condition "success or failure"
Jan 27 13:27:59.392: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459 container configmap-volume-test:
STEP: delete the pod
Jan 27 13:27:59.515: INFO: Waiting for pod pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459 to disappear
Jan 27 13:27:59.537: INFO: Pod pod-configmaps-a5f9078b-6248-47c7-a741-24d98021a459 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:27:59.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5828" for this suite.
Jan 27 13:28:05.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:28:05.723: INFO: namespace configmap-5828 deletion completed in 6.179835549s

• [SLOW TEST:16.669 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:28:05.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0127 13:28:35.983022       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 13:28:35.983: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:28:35.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8019" for this suite.
Jan 27 13:28:44.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:28:44.140: INFO: namespace gc-8019 deletion completed in 8.149086312s

• [SLOW TEST:38.417 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:28:44.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:28:45.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9874" for this suite.
Jan 27 13:28:51.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:28:51.859: INFO: namespace services-9874 deletion completed in 6.239863813s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:7.719 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:28:51.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:28:52.045: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711" in namespace "projected-4684" to be "success or failure"
Jan 27 13:28:52.076: INFO: Pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711": Phase="Pending", Reason="", readiness=false. Elapsed: 31.163045ms
Jan 27 13:28:54.087: INFO: Pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041851416s
Jan 27 13:28:56.095: INFO: Pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050601954s
Jan 27 13:28:58.109: INFO: Pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063867982s
Jan 27 13:29:00.120: INFO: Pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074721157s
Jan 27 13:29:02.141: INFO: Pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095870424s
STEP: Saw pod success
Jan 27 13:29:02.141: INFO: Pod "downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711" satisfied condition "success or failure"
Jan 27 13:29:02.178: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711 container client-container:
STEP: delete the pod
Jan 27 13:29:02.271: INFO: Waiting for pod downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711 to disappear
Jan 27 13:29:02.296: INFO: Pod downwardapi-volume-afd7d8b7-3930-4be9-a7ed-99d0bd283711 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:29:02.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4684" for this suite.
Jan 27 13:29:08.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:29:08.478: INFO: namespace projected-4684 deletion completed in 6.176064005s

• [SLOW TEST:16.618 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:29:08.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:29:08.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0" in namespace "projected-4523" to be "success or failure"
Jan 27 13:29:08.698: INFO: Pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0": Phase="Pending", Reason="", readiness=false. Elapsed: 115.649524ms
Jan 27 13:29:10.709: INFO: Pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126651381s
Jan 27 13:29:12.717: INFO: Pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134926796s
Jan 27 13:29:14.729: INFO: Pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147202718s
Jan 27 13:29:16.736: INFO: Pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154178377s
Jan 27 13:29:18.748: INFO: Pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.165424048s
STEP: Saw pod success
Jan 27 13:29:18.748: INFO: Pod "downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0" satisfied condition "success or failure"
Jan 27 13:29:18.754: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0 container client-container:
STEP: delete the pod
Jan 27 13:29:18.969: INFO: Waiting for pod downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0 to disappear
Jan 27 13:29:18.975: INFO: Pod downwardapi-volume-27686706-ba45-4b42-a04f-7259cc7b27a0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:29:18.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4523" for this suite.
Jan 27 13:29:25.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:29:25.217: INFO: namespace projected-4523 deletion completed in 6.235902327s

• [SLOW TEST:16.738 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:29:25.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e9dc8def-bb95-4d31-8774-ebb5d2aced9a
STEP: Creating a pod to test consume configMaps
Jan 27 13:29:25.345: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97" in namespace "projected-6587" to be "success or failure"
Jan 27 13:29:25.367: INFO: Pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97": Phase="Pending", Reason="", readiness=false. Elapsed: 22.571273ms
Jan 27 13:29:27.376: INFO: Pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031051921s
Jan 27 13:29:29.383: INFO: Pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038239502s
Jan 27 13:29:31.390: INFO: Pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045433439s
Jan 27 13:29:33.407: INFO: Pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062319424s
Jan 27 13:29:35.424: INFO: Pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079797772s
STEP: Saw pod success
Jan 27 13:29:35.425: INFO: Pod "pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97" satisfied condition "success or failure"
Jan 27 13:29:35.431: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97 container projected-configmap-volume-test:
STEP: delete the pod
Jan 27 13:29:35.633: INFO: Waiting for pod pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97 to disappear
Jan 27 13:29:35.644: INFO: Pod pod-projected-configmaps-767dfa64-c82a-4839-bc5b-430a5eae7c97 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:29:35.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6587" for this suite.
Jan 27 13:29:41.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:29:41.852: INFO: namespace projected-6587 deletion completed in 6.200693117s

• [SLOW TEST:16.635 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:29:41.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:29:52.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8216" for this suite.
Jan 27 13:29:58.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:29:58.358: INFO: namespace emptydir-wrapper-8216 deletion completed in 6.211846934s

• [SLOW TEST:16.505 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:29:58.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 27 13:30:07.881: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:30:08.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6933" for this suite. Jan 27 13:30:14.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:30:14.211: INFO: namespace container-runtime-6933 deletion completed in 6.184623872s • [SLOW TEST:15.852 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:30:14.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-cdbb0780-2c2d-4f70-8777-72ada9de57e5 STEP: 
Creating a pod to test consume configMaps Jan 27 13:30:14.362: INFO: Waiting up to 5m0s for pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8" in namespace "configmap-4809" to be "success or failure" Jan 27 13:30:14.377: INFO: Pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.908221ms Jan 27 13:30:16.390: INFO: Pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028316126s Jan 27 13:30:18.402: INFO: Pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03988053s Jan 27 13:30:20.413: INFO: Pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051119358s Jan 27 13:30:22.433: INFO: Pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071205766s Jan 27 13:30:24.461: INFO: Pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09945818s STEP: Saw pod success Jan 27 13:30:24.461: INFO: Pod "pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8" satisfied condition "success or failure" Jan 27 13:30:24.467: INFO: Trying to get logs from node iruya-node pod pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8 container configmap-volume-test: STEP: delete the pod Jan 27 13:30:24.909: INFO: Waiting for pod pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8 to disappear Jan 27 13:30:24.929: INFO: Pod pod-configmaps-19cbfc50-8e9b-40ca-8c53-060b4c1806e8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:30:24.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4809" for this suite. 
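The "volume with mappings" case above exercises a ConfigMap volume whose `items` list remaps a key to a different relative file path inside the mount. A minimal hand-written sketch of that shape (object names here are illustrative, not the randomized ones the test framework generated):

```yaml
# Illustrative only: a ConfigMap and a pod that mounts it with a
# key-to-path mapping, then reads the mapped file once and exits.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config            # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: example-configmap-pod     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/mapped/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-config
      items:
      - key: data-1
        path: mapped/data-1       # key surfaced at a remapped path
```

Because the container runs a single read command and exits, such a pod goes Pending → Succeeded, which is consistent with the framework above waiting for the "success or failure" condition rather than for readiness.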
Jan 27 13:30:30.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:30:31.156: INFO: namespace configmap-4809 deletion completed in 6.190520484s
• [SLOW TEST:16.945 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:30:31.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 27 13:30:31.320: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 27 13:30:31.332: INFO: Waiting for terminating namespaces to be deleted...
Jan 27 13:30:31.336: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 27 13:30:31.351: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.351: INFO: Container kube-proxy ready: true, restart count 0
Jan 27 13:30:31.351: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 27 13:30:31.351: INFO: Container weave ready: true, restart count 0
Jan 27 13:30:31.351: INFO: Container weave-npc ready: true, restart count 0
Jan 27 13:30:31.351: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 27 13:30:31.441: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container coredns ready: true, restart count 0
Jan 27 13:30:31.441: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container etcd ready: true, restart count 0
Jan 27 13:30:31.441: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container weave ready: true, restart count 0
Jan 27 13:30:31.441: INFO: Container weave-npc ready: true, restart count 0
Jan 27 13:30:31.441: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container kube-controller-manager ready: true, restart count 19
Jan 27 13:30:31.441: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container kube-proxy ready: true, restart count 0
Jan 27 13:30:31.441: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container kube-apiserver ready: true, restart count 0
Jan 27 13:30:31.441: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container kube-scheduler ready: true, restart count 13
Jan 27 13:30:31.441: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 27 13:30:31.441: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 27 13:30:31.610: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 27 13:30:31.610: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-2036f121-9d10-4fa7-a9c8-c8a92303fa43.15edc1876e127099], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3388/filler-pod-2036f121-9d10-4fa7-a9c8-c8a92303fa43 to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-2036f121-9d10-4fa7-a9c8-c8a92303fa43.15edc188b06fede6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-2036f121-9d10-4fa7-a9c8-c8a92303fa43.15edc189646d3cd6], Reason = [Created], Message = [Created container filler-pod-2036f121-9d10-4fa7-a9c8-c8a92303fa43]
STEP: Considering event: Type = [Normal], Name = [filler-pod-2036f121-9d10-4fa7-a9c8-c8a92303fa43.15edc1898a74fe81], Reason = [Started], Message = [Started container filler-pod-2036f121-9d10-4fa7-a9c8-c8a92303fa43]
STEP: Considering event: Type = [Normal], Name = [filler-pod-baec8033-d218-447c-9552-86c8c720d1de.15edc1876b202f76], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3388/filler-pod-baec8033-d218-447c-9552-86c8c720d1de to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-baec8033-d218-447c-9552-86c8c720d1de.15edc188962e6ab8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-baec8033-d218-447c-9552-86c8c720d1de.15edc1897b7e5a04], Reason = [Created], Message = [Created container filler-pod-baec8033-d218-447c-9552-86c8c720d1de]
STEP: Considering event: Type = [Normal], Name = [filler-pod-baec8033-d218-447c-9552-86c8c720d1de.15edc1899afe7f16], Reason = [Started], Message = [Started container filler-pod-baec8033-d218-447c-9552-86c8c720d1de]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15edc189c40e6c4e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:30:42.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3388" for this suite.
Jan 27 13:30:51.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:30:51.658: INFO: namespace sched-pred-3388 deletion completed in 8.684028023s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:20.502 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:30:51.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 in namespace container-probe-3038
Jan 27 13:31:01.843: INFO: Started pod liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 in namespace container-probe-3038
STEP: checking the pod's current state and verifying that restartCount is present
Jan 27 13:31:01.850: INFO: Initial restart count of pod liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 is 0
Jan 27 13:31:17.954: INFO: Restart count of pod container-probe-3038/liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 is now 1 (16.104749095s elapsed)
Jan 27 13:31:38.059: INFO: Restart count of pod container-probe-3038/liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 is now 2 (36.209035247s elapsed)
Jan 27 13:31:58.237: INFO: Restart count of pod container-probe-3038/liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 is now 3 (56.38730926s elapsed)
Jan 27 13:32:20.410: INFO: Restart count of pod container-probe-3038/liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 is now 4 (1m18.560581364s elapsed)
Jan 27 13:33:27.007: INFO: Restart count of pod container-probe-3038/liveness-91fe2ecb-fedf-4f72-bb26-5353bc294854 is now 5 (2m25.157200217s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:33:27.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3038" for this suite.
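For context, the monotonically increasing restart count above is driven by a liveness probe that starts passing and then fails permanently, so the kubelet keeps restarting the container. A minimal sketch of a pod that produces the same behavior (hypothetical name; this is the standard failing-exec-probe pattern, not the test's generated spec):

```yaml
# Illustrative sketch: the probe passes for ~30s, then the probe file is
# removed, the exec probe fails, and restartCount climbs on each restart.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example   # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

The climbing count can be observed with `kubectl get pod liveness-example -o jsonpath='{.status.containerStatuses[0].restartCount}'`, mirroring the "Restart count ... is now N" lines the framework logs above.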
Jan 27 13:33:33.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:33:33.367: INFO: namespace container-probe-3038 deletion completed in 6.286264611s
• [SLOW TEST:161.709 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:33:33.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 27 13:33:42.666: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:33:42.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7368" for this suite.
Jan 27 13:33:48.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:33:49.019: INFO: namespace container-runtime-7368 deletion completed in 6.226200169s
• [SLOW TEST:15.652 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:33:49.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 13:34:19.184: INFO: Container started at 2020-01-27 13:33:55 +0000 UTC, pod became ready at 2020-01-27 13:34:17 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:34:19.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3137" for this suite.
Jan 27 13:34:41.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:34:41.346: INFO: namespace container-probe-3137 deletion completed in 22.156187159s
• [SLOW TEST:52.326 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:34:41.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet
functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3395 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 27 13:34:41.577: INFO: Found 0 stateful pods, waiting for 3 Jan 27 13:34:51.587: INFO: Found 2 stateful pods, waiting for 3 Jan 27 13:35:01.602: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:35:01.602: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:35:01.602: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 13:35:11.597: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:35:11.597: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:35:11.597: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:35:11.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3395 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 13:35:13.984: INFO: stderr: "I0127 13:35:13.583839 901 log.go:172] (0xc000146790) (0xc000500820) Create stream\nI0127 13:35:13.584427 901 log.go:172] (0xc000146790) (0xc000500820) Stream added, broadcasting: 1\nI0127 13:35:13.590530 901 log.go:172] (0xc000146790) Reply frame received for 1\nI0127 13:35:13.590683 901 log.go:172] (0xc000146790) (0xc0006da0a0) Create stream\nI0127 13:35:13.590699 901 log.go:172] (0xc000146790) (0xc0006da0a0) Stream added, broadcasting: 3\nI0127 13:35:13.592889 901 log.go:172] (0xc000146790) Reply frame received for 3\nI0127 13:35:13.592932 901 
log.go:172] (0xc000146790) (0xc000312000) Create stream\nI0127 13:35:13.592942 901 log.go:172] (0xc000146790) (0xc000312000) Stream added, broadcasting: 5\nI0127 13:35:13.595649 901 log.go:172] (0xc000146790) Reply frame received for 5\nI0127 13:35:13.716360 901 log.go:172] (0xc000146790) Data frame received for 5\nI0127 13:35:13.716500 901 log.go:172] (0xc000312000) (5) Data frame handling\nI0127 13:35:13.716548 901 log.go:172] (0xc000312000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 13:35:13.823494 901 log.go:172] (0xc000146790) Data frame received for 3\nI0127 13:35:13.823777 901 log.go:172] (0xc0006da0a0) (3) Data frame handling\nI0127 13:35:13.823901 901 log.go:172] (0xc0006da0a0) (3) Data frame sent\nI0127 13:35:13.972783 901 log.go:172] (0xc000146790) (0xc0006da0a0) Stream removed, broadcasting: 3\nI0127 13:35:13.973025 901 log.go:172] (0xc000146790) Data frame received for 1\nI0127 13:35:13.973071 901 log.go:172] (0xc000146790) (0xc000312000) Stream removed, broadcasting: 5\nI0127 13:35:13.973088 901 log.go:172] (0xc000500820) (1) Data frame handling\nI0127 13:35:13.973114 901 log.go:172] (0xc000500820) (1) Data frame sent\nI0127 13:35:13.973124 901 log.go:172] (0xc000146790) (0xc000500820) Stream removed, broadcasting: 1\nI0127 13:35:13.973136 901 log.go:172] (0xc000146790) Go away received\nI0127 13:35:13.974097 901 log.go:172] (0xc000146790) (0xc000500820) Stream removed, broadcasting: 1\nI0127 13:35:13.974134 901 log.go:172] (0xc000146790) (0xc0006da0a0) Stream removed, broadcasting: 3\nI0127 13:35:13.974145 901 log.go:172] (0xc000146790) (0xc000312000) Stream removed, broadcasting: 5\n" Jan 27 13:35:13.985: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 13:35:13.985: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from 
docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 27 13:35:24.046: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 27 13:35:34.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3395 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:35:34.594: INFO: stderr: "I0127 13:35:34.329758 927 log.go:172] (0xc00084a0b0) (0xc0006826e0) Create stream\nI0127 13:35:34.329929 927 log.go:172] (0xc00084a0b0) (0xc0006826e0) Stream added, broadcasting: 1\nI0127 13:35:34.332255 927 log.go:172] (0xc00084a0b0) Reply frame received for 1\nI0127 13:35:34.332288 927 log.go:172] (0xc00084a0b0) (0xc0006321e0) Create stream\nI0127 13:35:34.332294 927 log.go:172] (0xc00084a0b0) (0xc0006321e0) Stream added, broadcasting: 3\nI0127 13:35:34.333489 927 log.go:172] (0xc00084a0b0) Reply frame received for 3\nI0127 13:35:34.333511 927 log.go:172] (0xc00084a0b0) (0xc000682780) Create stream\nI0127 13:35:34.333517 927 log.go:172] (0xc00084a0b0) (0xc000682780) Stream added, broadcasting: 5\nI0127 13:35:34.335468 927 log.go:172] (0xc00084a0b0) Reply frame received for 5\nI0127 13:35:34.426064 927 log.go:172] (0xc00084a0b0) Data frame received for 3\nI0127 13:35:34.426430 927 log.go:172] (0xc0006321e0) (3) Data frame handling\nI0127 13:35:34.426456 927 log.go:172] (0xc0006321e0) (3) Data frame sent\nI0127 13:35:34.426500 927 log.go:172] (0xc00084a0b0) Data frame received for 5\nI0127 13:35:34.426512 927 log.go:172] (0xc000682780) (5) Data frame handling\nI0127 13:35:34.426521 927 log.go:172] (0xc000682780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0127 13:35:34.580814 927 log.go:172] (0xc00084a0b0) Data frame received for 1\nI0127 13:35:34.581343 927 log.go:172] (0xc0006826e0) (1) Data frame handling\nI0127 13:35:34.581495 927 log.go:172] (0xc0006826e0) (1) Data frame sent\nI0127 13:35:34.582276 927 
log.go:172] (0xc00084a0b0) (0xc0006826e0) Stream removed, broadcasting: 1\nI0127 13:35:34.583323 927 log.go:172] (0xc00084a0b0) (0xc0006321e0) Stream removed, broadcasting: 3\nI0127 13:35:34.583805 927 log.go:172] (0xc00084a0b0) (0xc000682780) Stream removed, broadcasting: 5\nI0127 13:35:34.583929 927 log.go:172] (0xc00084a0b0) Go away received\nI0127 13:35:34.584158 927 log.go:172] (0xc00084a0b0) (0xc0006826e0) Stream removed, broadcasting: 1\nI0127 13:35:34.584231 927 log.go:172] (0xc00084a0b0) (0xc0006321e0) Stream removed, broadcasting: 3\nI0127 13:35:34.584298 927 log.go:172] (0xc00084a0b0) (0xc000682780) Stream removed, broadcasting: 5\n" Jan 27 13:35:34.594: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 13:35:34.594: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 13:35:44.682: INFO: Waiting for StatefulSet statefulset-3395/ss2 to complete update Jan 27 13:35:44.682: INFO: Waiting for Pod statefulset-3395/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:35:44.682: INFO: Waiting for Pod statefulset-3395/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:35:54.694: INFO: Waiting for StatefulSet statefulset-3395/ss2 to complete update Jan 27 13:35:54.694: INFO: Waiting for Pod statefulset-3395/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:35:54.694: INFO: Waiting for Pod statefulset-3395/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:36:04.710: INFO: Waiting for StatefulSet statefulset-3395/ss2 to complete update Jan 27 13:36:04.711: INFO: Waiting for Pod statefulset-3395/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 13:36:14.761: INFO: Waiting for StatefulSet statefulset-3395/ss2 to complete update STEP: Rolling back to a previous revision Jan 27 13:36:24.695: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3395 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 13:36:25.246: INFO: stderr: "I0127 13:36:24.961059 942 log.go:172] (0xc00013adc0) (0xc0005c6820) Create stream\nI0127 13:36:24.961243 942 log.go:172] (0xc00013adc0) (0xc0005c6820) Stream added, broadcasting: 1\nI0127 13:36:24.964517 942 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0127 13:36:24.964562 942 log.go:172] (0xc00013adc0) (0xc0007de000) Create stream\nI0127 13:36:24.964574 942 log.go:172] (0xc00013adc0) (0xc0007de000) Stream added, broadcasting: 3\nI0127 13:36:24.965913 942 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0127 13:36:24.965946 942 log.go:172] (0xc00013adc0) (0xc0004bc000) Create stream\nI0127 13:36:24.965959 942 log.go:172] (0xc00013adc0) (0xc0004bc000) Stream added, broadcasting: 5\nI0127 13:36:24.967044 942 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0127 13:36:25.062592 942 log.go:172] (0xc00013adc0) Data frame received for 5\nI0127 13:36:25.062662 942 log.go:172] (0xc0004bc000) (5) Data frame handling\nI0127 13:36:25.062684 942 log.go:172] (0xc0004bc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 13:36:25.126327 942 log.go:172] (0xc00013adc0) Data frame received for 3\nI0127 13:36:25.126378 942 log.go:172] (0xc0007de000) (3) Data frame handling\nI0127 13:36:25.126393 942 log.go:172] (0xc0007de000) (3) Data frame sent\nI0127 13:36:25.237568 942 log.go:172] (0xc00013adc0) Data frame received for 1\nI0127 13:36:25.237658 942 log.go:172] (0xc00013adc0) (0xc0004bc000) Stream removed, broadcasting: 5\nI0127 13:36:25.237720 942 log.go:172] (0xc0005c6820) (1) Data frame handling\nI0127 13:36:25.237806 942 log.go:172] (0xc00013adc0) (0xc0007de000) Stream removed, broadcasting: 3\nI0127 13:36:25.237854 942 log.go:172] (0xc0005c6820) (1) Data frame sent\nI0127 13:36:25.237877 942 log.go:172] (0xc00013adc0) 
(0xc0005c6820) Stream removed, broadcasting: 1\nI0127 13:36:25.237905 942 log.go:172] (0xc00013adc0) Go away received\nI0127 13:36:25.239305 942 log.go:172] (0xc00013adc0) (0xc0005c6820) Stream removed, broadcasting: 1\nI0127 13:36:25.239422 942 log.go:172] (0xc00013adc0) (0xc0007de000) Stream removed, broadcasting: 3\nI0127 13:36:25.239432 942 log.go:172] (0xc00013adc0) (0xc0004bc000) Stream removed, broadcasting: 5\n" Jan 27 13:36:25.247: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 13:36:25.247: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 13:36:35.311: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 27 13:36:45.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3395 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:36:45.755: INFO: stderr: "I0127 13:36:45.556146 959 log.go:172] (0xc00098a0b0) (0xc0007205a0) Create stream\nI0127 13:36:45.556339 959 log.go:172] (0xc00098a0b0) (0xc0007205a0) Stream added, broadcasting: 1\nI0127 13:36:45.560337 959 log.go:172] (0xc00098a0b0) Reply frame received for 1\nI0127 13:36:45.560387 959 log.go:172] (0xc00098a0b0) (0xc0009e2000) Create stream\nI0127 13:36:45.560397 959 log.go:172] (0xc00098a0b0) (0xc0009e2000) Stream added, broadcasting: 3\nI0127 13:36:45.562667 959 log.go:172] (0xc00098a0b0) Reply frame received for 3\nI0127 13:36:45.562702 959 log.go:172] (0xc00098a0b0) (0xc0005e2140) Create stream\nI0127 13:36:45.562710 959 log.go:172] (0xc00098a0b0) (0xc0005e2140) Stream added, broadcasting: 5\nI0127 13:36:45.564676 959 log.go:172] (0xc00098a0b0) Reply frame received for 5\nI0127 13:36:45.665780 959 log.go:172] (0xc00098a0b0) Data frame received for 5\nI0127 13:36:45.665883 959 log.go:172] (0xc0005e2140) (5) Data frame handling\nI0127 13:36:45.665904 959 
log.go:172] (0xc0005e2140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0127 13:36:45.666033 959 log.go:172] (0xc00098a0b0) Data frame received for 3\nI0127 13:36:45.666042 959 log.go:172] (0xc0009e2000) (3) Data frame handling\nI0127 13:36:45.666053 959 log.go:172] (0xc0009e2000) (3) Data frame sent\nI0127 13:36:45.746417 959 log.go:172] (0xc00098a0b0) (0xc0009e2000) Stream removed, broadcasting: 3\nI0127 13:36:45.746748 959 log.go:172] (0xc00098a0b0) Data frame received for 1\nI0127 13:36:45.746818 959 log.go:172] (0xc0007205a0) (1) Data frame handling\nI0127 13:36:45.746893 959 log.go:172] (0xc0007205a0) (1) Data frame sent\nI0127 13:36:45.747143 959 log.go:172] (0xc00098a0b0) (0xc0007205a0) Stream removed, broadcasting: 1\nI0127 13:36:45.747576 959 log.go:172] (0xc00098a0b0) (0xc0005e2140) Stream removed, broadcasting: 5\nI0127 13:36:45.747794 959 log.go:172] (0xc00098a0b0) Go away received\nI0127 13:36:45.748510 959 log.go:172] (0xc00098a0b0) (0xc0007205a0) Stream removed, broadcasting: 1\nI0127 13:36:45.748542 959 log.go:172] (0xc00098a0b0) (0xc0009e2000) Stream removed, broadcasting: 3\nI0127 13:36:45.748564 959 log.go:172] (0xc00098a0b0) (0xc0005e2140) Stream removed, broadcasting: 5\n" Jan 27 13:36:45.755: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 13:36:45.755: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 13:36:55.812: INFO: Waiting for StatefulSet statefulset-3395/ss2 to complete update Jan 27 13:36:55.812: INFO: Waiting for Pod statefulset-3395/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 13:36:55.812: INFO: Waiting for Pod statefulset-3395/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 13:36:55.812: INFO: Waiting for Pod statefulset-3395/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 13:37:06.465: INFO: 
Waiting for StatefulSet statefulset-3395/ss2 to complete update Jan 27 13:37:06.466: INFO: Waiting for Pod statefulset-3395/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 13:37:06.466: INFO: Waiting for Pod statefulset-3395/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 13:37:15.834: INFO: Waiting for StatefulSet statefulset-3395/ss2 to complete update Jan 27 13:37:15.834: INFO: Waiting for Pod statefulset-3395/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 13:37:25.831: INFO: Waiting for StatefulSet statefulset-3395/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 27 13:37:35.836: INFO: Deleting all statefulset in ns statefulset-3395 Jan 27 13:37:35.841: INFO: Scaling statefulset ss2 to 0 Jan 27 13:38:16.301: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 13:38:16.312: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:38:16.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3395" for this suite. 
Jan 27 13:38:24.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:38:24.555: INFO: namespace statefulset-3395 deletion completed in 8.197378882s • [SLOW TEST:223.209 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:38:24.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 27 13:38:34.809: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 27 
13:38:44.979: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:38:44.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1713" for this suite. Jan 27 13:38:51.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:38:51.142: INFO: namespace pods-1713 deletion completed in 6.151481653s • [SLOW TEST:26.585 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:38:51.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 27 
13:38:52.050: INFO: Pod name wrapped-volume-race-7f601278-9de1-422a-a87d-d2f3a78b647e: Found 0 pods out of 5 Jan 27 13:38:57.060: INFO: Pod name wrapped-volume-race-7f601278-9de1-422a-a87d-d2f3a78b647e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7f601278-9de1-422a-a87d-d2f3a78b647e in namespace emptydir-wrapper-1588, will wait for the garbage collector to delete the pods Jan 27 13:39:27.272: INFO: Deleting ReplicationController wrapped-volume-race-7f601278-9de1-422a-a87d-d2f3a78b647e took: 31.321728ms Jan 27 13:39:27.673: INFO: Terminating ReplicationController wrapped-volume-race-7f601278-9de1-422a-a87d-d2f3a78b647e pods took: 400.607851ms STEP: Creating RC which spawns configmap-volume pods Jan 27 13:40:17.186: INFO: Pod name wrapped-volume-race-876b0e89-6e6a-4727-a4f0-3f2b85b8ac8d: Found 0 pods out of 5 Jan 27 13:40:22.209: INFO: Pod name wrapped-volume-race-876b0e89-6e6a-4727-a4f0-3f2b85b8ac8d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-876b0e89-6e6a-4727-a4f0-3f2b85b8ac8d in namespace emptydir-wrapper-1588, will wait for the garbage collector to delete the pods Jan 27 13:40:54.332: INFO: Deleting ReplicationController wrapped-volume-race-876b0e89-6e6a-4727-a4f0-3f2b85b8ac8d took: 22.665011ms Jan 27 13:40:54.733: INFO: Terminating ReplicationController wrapped-volume-race-876b0e89-6e6a-4727-a4f0-3f2b85b8ac8d pods took: 400.678897ms STEP: Creating RC which spawns configmap-volume pods Jan 27 13:41:39.780: INFO: Pod name wrapped-volume-race-7cc489ff-91f4-4168-8fd6-a78935b33a99: Found 0 pods out of 5 Jan 27 13:41:44.791: INFO: Pod name wrapped-volume-race-7cc489ff-91f4-4168-8fd6-a78935b33a99: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7cc489ff-91f4-4168-8fd6-a78935b33a99 in namespace emptydir-wrapper-1588, will wait for the garbage collector to delete the 
pods Jan 27 13:42:20.924: INFO: Deleting ReplicationController wrapped-volume-race-7cc489ff-91f4-4168-8fd6-a78935b33a99 took: 29.533002ms Jan 27 13:42:21.324: INFO: Terminating ReplicationController wrapped-volume-race-7cc489ff-91f4-4168-8fd6-a78935b33a99 pods took: 400.407732ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:43:07.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1588" for this suite. Jan 27 13:43:17.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:43:18.089: INFO: namespace emptydir-wrapper-1588 deletion completed in 10.147501216s • [SLOW TEST:266.947 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:43:18.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jan 27 13:43:18.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2784' Jan 27 13:43:18.684: INFO: stderr: "" Jan 27 13:43:18.684: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 27 13:43:18.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Jan 27 13:43:18.952: INFO: stderr: "" Jan 27 13:43:18.952: INFO: stdout: "update-demo-nautilus-smmz8 update-demo-nautilus-wjksb " Jan 27 13:43:18.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smmz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:19.177: INFO: stderr: "" Jan 27 13:43:19.177: INFO: stdout: "" Jan 27 13:43:19.177: INFO: update-demo-nautilus-smmz8 is created but not running Jan 27 13:43:24.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Jan 27 13:43:24.314: INFO: stderr: "" Jan 27 13:43:24.314: INFO: stdout: "update-demo-nautilus-smmz8 update-demo-nautilus-wjksb " Jan 27 13:43:24.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smmz8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:24.406: INFO: stderr: "" Jan 27 13:43:24.406: INFO: stdout: "" Jan 27 13:43:24.406: INFO: update-demo-nautilus-smmz8 is created but not running Jan 27 13:43:29.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Jan 27 13:43:29.552: INFO: stderr: "" Jan 27 13:43:29.553: INFO: stdout: "update-demo-nautilus-smmz8 update-demo-nautilus-wjksb " Jan 27 13:43:29.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smmz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:29.674: INFO: stderr: "" Jan 27 13:43:29.674: INFO: stdout: "" Jan 27 13:43:29.675: INFO: update-demo-nautilus-smmz8 is created but not running Jan 27 13:43:34.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Jan 27 13:43:34.894: INFO: stderr: "" Jan 27 13:43:34.894: INFO: stdout: "update-demo-nautilus-smmz8 update-demo-nautilus-wjksb " Jan 27 13:43:34.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smmz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:34.985: INFO: stderr: "" Jan 27 13:43:34.985: INFO: stdout: "" Jan 27 13:43:34.985: INFO: update-demo-nautilus-smmz8 is created but not running Jan 27 13:43:39.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Jan 27 13:43:40.175: INFO: stderr: "" Jan 27 13:43:40.176: INFO: stdout: "update-demo-nautilus-smmz8 update-demo-nautilus-wjksb " Jan 27 13:43:40.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smmz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:40.279: INFO: stderr: "" Jan 27 13:43:40.279: INFO: stdout: "true" Jan 27 13:43:40.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smmz8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:40.422: INFO: stderr: "" Jan 27 13:43:40.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 27 13:43:40.422: INFO: validating pod update-demo-nautilus-smmz8 Jan 27 13:43:40.518: INFO: got data: { "image": "nautilus.jpg" } Jan 27 13:43:40.518: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 27 13:43:40.518: INFO: update-demo-nautilus-smmz8 is verified up and running Jan 27 13:43:40.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjksb -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:40.642: INFO: stderr: "" Jan 27 13:43:40.642: INFO: stdout: "true" Jan 27 13:43:40.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjksb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:43:40.773: INFO: stderr: "" Jan 27 13:43:40.773: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 27 13:43:40.773: INFO: validating pod update-demo-nautilus-wjksb Jan 27 13:43:40.804: INFO: got data: { "image": "nautilus.jpg" } Jan 27 13:43:40.804: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 27 13:43:40.804: INFO: update-demo-nautilus-wjksb is verified up and running STEP: rolling-update to new replication controller Jan 27 13:43:40.808: INFO: scanned /root for discovery docs: Jan 27 13:43:40.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2784' Jan 27 13:44:10.994: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 27 13:44:10.994: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 27 13:44:10.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Jan 27 13:44:11.133: INFO: stderr: "" Jan 27 13:44:11.133: INFO: stdout: "update-demo-kitten-q2dhm update-demo-kitten-s6twm " Jan 27 13:44:11.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q2dhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:44:11.217: INFO: stderr: "" Jan 27 13:44:11.217: INFO: stdout: "true" Jan 27 13:44:11.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q2dhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:44:11.306: INFO: stderr: "" Jan 27 13:44:11.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 27 13:44:11.306: INFO: validating pod update-demo-kitten-q2dhm Jan 27 13:44:11.330: INFO: got data: { "image": "kitten.jpg" } Jan 27 13:44:11.330: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 27 13:44:11.330: INFO: update-demo-kitten-q2dhm is verified up and running Jan 27 13:44:11.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s6twm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:44:11.416: INFO: stderr: "" Jan 27 13:44:11.416: INFO: stdout: "true" Jan 27 13:44:11.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s6twm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2784' Jan 27 13:44:11.527: INFO: stderr: "" Jan 27 13:44:11.527: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 27 13:44:11.527: INFO: validating pod update-demo-kitten-s6twm Jan 27 13:44:11.557: INFO: got data: { "image": "kitten.jpg" } Jan 27 13:44:11.557: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 27 13:44:11.557: INFO: update-demo-kitten-s6twm is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:44:11.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2784" for this suite. 
Jan 27 13:44:35.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:44:35.723: INFO: namespace kubectl-2784 deletion completed in 24.161073711s • [SLOW TEST:77.634 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:44:35.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-h94f STEP: Creating a pod to test atomic-volume-subpath Jan 27 13:44:35.812: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h94f" in namespace "subpath-2699" to be "success or failure" Jan 27 13:44:35.819: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.194197ms Jan 27 13:44:37.831: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019543286s Jan 27 13:44:39.859: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047426805s Jan 27 13:44:41.892: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079962091s Jan 27 13:44:43.973: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 8.16068542s Jan 27 13:44:45.982: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 10.17006864s Jan 27 13:44:47.988: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 12.176527685s Jan 27 13:44:49.998: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 14.186081154s Jan 27 13:44:52.010: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 16.197795868s Jan 27 13:44:54.019: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 18.206944141s Jan 27 13:44:56.027: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 20.21524165s Jan 27 13:44:58.038: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 22.2256234s Jan 27 13:45:00.047: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 24.235356831s Jan 27 13:45:02.060: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Running", Reason="", readiness=true. Elapsed: 26.248023459s Jan 27 13:45:04.120: INFO: Pod "pod-subpath-test-configmap-h94f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.308233842s STEP: Saw pod success Jan 27 13:45:04.120: INFO: Pod "pod-subpath-test-configmap-h94f" satisfied condition "success or failure" Jan 27 13:45:04.126: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-h94f container test-container-subpath-configmap-h94f: STEP: delete the pod Jan 27 13:45:04.404: INFO: Waiting for pod pod-subpath-test-configmap-h94f to disappear Jan 27 13:45:04.414: INFO: Pod pod-subpath-test-configmap-h94f no longer exists STEP: Deleting pod pod-subpath-test-configmap-h94f Jan 27 13:45:04.414: INFO: Deleting pod "pod-subpath-test-configmap-h94f" in namespace "subpath-2699" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:45:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2699" for this suite. Jan 27 13:45:10.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:45:10.684: INFO: namespace subpath-2699 deletion completed in 6.25776006s • [SLOW TEST:34.961 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jan 27 13:45:10.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 27 13:45:10.849: INFO: Waiting up to 5m0s for pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b" in namespace "emptydir-8926" to be "success or failure" Jan 27 13:45:10.855: INFO: Pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.375637ms Jan 27 13:45:12.930: INFO: Pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0802208s Jan 27 13:45:14.940: INFO: Pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090901432s Jan 27 13:45:16.948: INFO: Pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09891078s Jan 27 13:45:18.958: INFO: Pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10849553s Jan 27 13:45:20.970: INFO: Pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.120167265s STEP: Saw pod success Jan 27 13:45:20.970: INFO: Pod "pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b" satisfied condition "success or failure" Jan 27 13:45:20.975: INFO: Trying to get logs from node iruya-node pod pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b container test-container: STEP: delete the pod Jan 27 13:45:21.045: INFO: Waiting for pod pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b to disappear Jan 27 13:45:21.124: INFO: Pod pod-32bb4881-f929-4ec9-89a0-c61b7fe6998b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:45:21.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8926" for this suite. Jan 27 13:45:27.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:45:27.316: INFO: namespace emptydir-8926 deletion completed in 6.182492715s • [SLOW TEST:16.631 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:45:27.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace 
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-df380f7c-f80a-44d4-ab25-efded64427c4 STEP: Creating a pod to test consume secrets Jan 27 13:45:27.489: INFO: Waiting up to 5m0s for pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160" in namespace "secrets-3083" to be "success or failure" Jan 27 13:45:27.575: INFO: Pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160": Phase="Pending", Reason="", readiness=false. Elapsed: 86.020479ms Jan 27 13:45:29.583: INFO: Pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094032637s Jan 27 13:45:31.591: INFO: Pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101943608s Jan 27 13:45:33.604: INFO: Pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11482335s Jan 27 13:45:35.612: INFO: Pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122767873s Jan 27 13:45:37.620: INFO: Pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.131073788s STEP: Saw pod success Jan 27 13:45:37.620: INFO: Pod "pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160" satisfied condition "success or failure" Jan 27 13:45:37.627: INFO: Trying to get logs from node iruya-node pod pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160 container secret-volume-test: STEP: delete the pod Jan 27 13:45:37.788: INFO: Waiting for pod pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160 to disappear Jan 27 13:45:37.806: INFO: Pod pod-secrets-c677099d-086d-44c1-b449-c9c5b9c3e160 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:45:37.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3083" for this suite. Jan 27 13:45:43.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:45:44.113: INFO: namespace secrets-3083 deletion completed in 6.292333568s • [SLOW TEST:16.797 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:45:44.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 27 13:45:44.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-394' Jan 27 13:45:46.099: INFO: stderr: "" Jan 27 13:45:46.100: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 27 13:45:56.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-394 -o json' Jan 27 13:45:56.302: INFO: stderr: "" Jan 27 13:45:56.302: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-27T13:45:46Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-394\",\n \"resourceVersion\": \"22066468\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-394/pods/e2e-test-nginx-pod\",\n \"uid\": \"93c9d1ae-4cea-4590-80d8-f9556261ac01\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-n4pcw\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-n4pcw\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-n4pcw\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-27T13:45:46Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-27T13:45:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-27T13:45:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-27T13:45:46Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://4cc21040d0a387218c136c518908f1f9581204761e1f37932c70c5afd126c243\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-27T13:45:52Z\"\n }\n }\n }\n ],\n 
\"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-27T13:45:46Z\"\n }\n}\n" STEP: replace the image in the pod Jan 27 13:45:56.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-394' Jan 27 13:45:56.627: INFO: stderr: "" Jan 27 13:45:56.627: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jan 27 13:45:56.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-394' Jan 27 13:46:05.213: INFO: stderr: "" Jan 27 13:46:05.213: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:46:05.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-394" for this suite. 
Jan 27 13:46:11.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:46:11.324: INFO: namespace kubectl-394 deletion completed in 6.100281856s
• [SLOW TEST:27.210 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:46:11.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 27 13:46:11.435: INFO: Waiting up to 5m0s for pod "pod-e5344298-344c-4d35-927e-a198ac80cce5" in namespace "emptydir-76" to be "success or failure"
Jan 27 13:46:11.447: INFO: Pod "pod-e5344298-344c-4d35-927e-a198ac80cce5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.546321ms
Jan 27 13:46:13.451: INFO: Pod "pod-e5344298-344c-4d35-927e-a198ac80cce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016368248s
Jan 27 13:46:15.467: INFO: Pod "pod-e5344298-344c-4d35-927e-a198ac80cce5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031899693s
Jan 27 13:46:17.474: INFO: Pod "pod-e5344298-344c-4d35-927e-a198ac80cce5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038994358s
Jan 27 13:46:19.482: INFO: Pod "pod-e5344298-344c-4d35-927e-a198ac80cce5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046863498s
Jan 27 13:46:21.492: INFO: Pod "pod-e5344298-344c-4d35-927e-a198ac80cce5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057112471s
STEP: Saw pod success
Jan 27 13:46:21.492: INFO: Pod "pod-e5344298-344c-4d35-927e-a198ac80cce5" satisfied condition "success or failure"
Jan 27 13:46:21.497: INFO: Trying to get logs from node iruya-node pod pod-e5344298-344c-4d35-927e-a198ac80cce5 container test-container:
STEP: delete the pod
Jan 27 13:46:21.689: INFO: Waiting for pod pod-e5344298-344c-4d35-927e-a198ac80cce5 to disappear
Jan 27 13:46:21.703: INFO: Pod pod-e5344298-344c-4d35-927e-a198ac80cce5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:46:21.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-76" for this suite.
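[Editor's note: the `Waiting up to 5m0s ... to be "success or failure"` lines above are a poll on the pod's phase until it reaches a terminal state. A sketch of that loop in Python (the `get_phase` callable stands in for an API lookup; this is an illustrative reimplementation, not the framework's actual Go helper):]

```python
import itertools

def wait_for_pod_success(get_phase, timeout_s=300, interval_s=2):
    """Poll a pod's phase until it is terminal, mirroring the
    Pending -> Succeeded progression logged every ~2s above."""
    for elapsed in itertools.count(0, interval_s):
        if elapsed > timeout_s:
            raise TimeoutError("pod never reached a terminal phase")
        phase = get_phase()
        if phase == "Succeeded":
            return True          # corresponds to "STEP: Saw pod success"
        if phase == "Failed":
            return False
        # the real harness sleeps `interval_s` between polls

phases = iter(["Pending"] * 5 + ["Succeeded"])
print(wait_for_pod_success(lambda: next(phases)))
# -> True
```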
Jan 27 13:46:27.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:46:27.911: INFO: namespace emptydir-76 deletion completed in 6.202074541s
• [SLOW TEST:16.586 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:46:27.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-6add99e8-ffee-4633-b3b6-7d553faa7c8f
STEP: Creating a pod to test consume secrets
Jan 27 13:46:28.047: INFO: Waiting up to 5m0s for pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71" in namespace "secrets-2671" to be "success or failure"
Jan 27 13:46:28.069: INFO: Pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71": Phase="Pending", Reason="", readiness=false. Elapsed: 21.854641ms
Jan 27 13:46:30.085: INFO: Pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037644423s
Jan 27 13:46:32.101: INFO: Pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053383477s
Jan 27 13:46:34.111: INFO: Pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063797465s
Jan 27 13:46:36.141: INFO: Pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093270949s
Jan 27 13:46:38.155: INFO: Pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108134812s
STEP: Saw pod success
Jan 27 13:46:38.156: INFO: Pod "pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71" satisfied condition "success or failure"
Jan 27 13:46:38.162: INFO: Trying to get logs from node iruya-node pod pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71 container secret-volume-test:
STEP: delete the pod
Jan 27 13:46:38.265: INFO: Waiting for pod pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71 to disappear
Jan 27 13:46:38.403: INFO: Pod pod-secrets-daba3694-1816-4b4d-92bd-15bf3df08d71 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:46:38.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2671" for this suite.
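[Editor's note: secret and projected volume file modes in the Kubernetes API are plain decimal integers, which is why the pod dump earlier in this log shows `"defaultMode": 420` for what manifests usually write as octal `0644`. A quick sanity check:]

```python
# The API server serializes file modes as decimal integers;
# 420 decimal is the familiar octal 0644 (rw-r--r--).
default_mode = 420
print(oct(default_mode))        # octal spelling of the same number
print(default_mode == 0o644)
# -> 0o644
# -> True
```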
Jan 27 13:46:44.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:46:44.580: INFO: namespace secrets-2671 deletion completed in 6.167563117s
• [SLOW TEST:16.669 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:46:44.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:46:53.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7674" for this suite.
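[Editor's note: the adoption step above hinges on equality-based label-selector matching: an orphan pod whose labels satisfy the ReplicationController's selector gains that controller as owner. The core predicate is just a subset test; this sketch uses an illustrative `matches_selector` name, not the controller manager's actual helper:]

```python
def matches_selector(selector: dict, pod_labels: dict) -> bool:
    """True when every selector key/value pair appears in the pod's
    labels -- the equality-based matching an RC selector performs."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# The test above labels the orphan pod with a 'name' label.
orphan_labels = {"name": "pod-adoption"}
print(matches_selector({"name": "pod-adoption"}, orphan_labels))  # adopted
print(matches_selector({"name": "something-else"}, orphan_labels))
# -> True
# -> False
```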
Jan 27 13:47:15.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:47:15.922: INFO: namespace replication-controller-7674 deletion completed in 22.156736584s • [SLOW TEST:31.341 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:47:15.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-lqmp STEP: Creating a pod to test atomic-volume-subpath Jan 27 13:47:16.015: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lqmp" in namespace "subpath-3113" to be "success or failure" Jan 27 13:47:16.083: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Pending", Reason="", readiness=false. Elapsed: 67.687869ms Jan 27 13:47:18.091: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.075888586s Jan 27 13:47:20.098: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083466069s Jan 27 13:47:22.134: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119174777s Jan 27 13:47:24.143: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 8.128087416s Jan 27 13:47:26.155: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 10.140471751s Jan 27 13:47:28.166: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 12.150595641s Jan 27 13:47:30.173: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 14.158216727s Jan 27 13:47:32.180: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 16.164856407s Jan 27 13:47:34.188: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 18.173020102s Jan 27 13:47:36.197: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 20.181524395s Jan 27 13:47:38.206: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 22.190805844s Jan 27 13:47:40.213: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 24.197964405s Jan 27 13:47:42.223: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 26.208240188s Jan 27 13:47:44.230: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Running", Reason="", readiness=true. Elapsed: 28.214783462s Jan 27 13:47:46.242: INFO: Pod "pod-subpath-test-projected-lqmp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.226986838s STEP: Saw pod success Jan 27 13:47:46.242: INFO: Pod "pod-subpath-test-projected-lqmp" satisfied condition "success or failure" Jan 27 13:47:46.248: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-lqmp container test-container-subpath-projected-lqmp: STEP: delete the pod Jan 27 13:47:46.813: INFO: Waiting for pod pod-subpath-test-projected-lqmp to disappear Jan 27 13:47:46.824: INFO: Pod pod-subpath-test-projected-lqmp no longer exists STEP: Deleting pod pod-subpath-test-projected-lqmp Jan 27 13:47:46.824: INFO: Deleting pod "pod-subpath-test-projected-lqmp" in namespace "subpath-3113" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:47:46.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3113" for this suite. Jan 27 13:47:52.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:47:52.974: INFO: namespace subpath-3113 deletion completed in 6.137914812s • [SLOW TEST:37.050 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:47:52.974: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 27 13:47:53.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1378' Jan 27 13:47:53.439: INFO: stderr: "" Jan 27 13:47:53.439: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 27 13:47:53.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:47:53.560: INFO: stderr: "" Jan 27 13:47:53.560: INFO: stdout: "update-demo-nautilus-l4p7g update-demo-nautilus-v4znl " Jan 27 13:47:53.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4p7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:47:53.659: INFO: stderr: "" Jan 27 13:47:53.659: INFO: stdout: "" Jan 27 13:47:53.659: INFO: update-demo-nautilus-l4p7g is created but not running Jan 27 13:47:58.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:47:59.329: INFO: stderr: "" Jan 27 13:47:59.329: INFO: stdout: "update-demo-nautilus-l4p7g update-demo-nautilus-v4znl " Jan 27 13:47:59.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4p7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:47:59.799: INFO: stderr: "" Jan 27 13:47:59.800: INFO: stdout: "" Jan 27 13:47:59.800: INFO: update-demo-nautilus-l4p7g is created but not running Jan 27 13:48:04.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:48:04.971: INFO: stderr: "" Jan 27 13:48:04.971: INFO: stdout: "update-demo-nautilus-l4p7g update-demo-nautilus-v4znl " Jan 27 13:48:04.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4p7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:05.077: INFO: stderr: "" Jan 27 13:48:05.077: INFO: stdout: "true" Jan 27 13:48:05.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4p7g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:05.160: INFO: stderr: "" Jan 27 13:48:05.160: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 27 13:48:05.160: INFO: validating pod update-demo-nautilus-l4p7g Jan 27 13:48:05.177: INFO: got data: { "image": "nautilus.jpg" } Jan 27 13:48:05.177: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 27 13:48:05.177: INFO: update-demo-nautilus-l4p7g is verified up and running Jan 27 13:48:05.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4znl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:05.268: INFO: stderr: "" Jan 27 13:48:05.268: INFO: stdout: "true" Jan 27 13:48:05.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4znl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:05.366: INFO: stderr: "" Jan 27 13:48:05.366: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 27 13:48:05.366: INFO: validating pod update-demo-nautilus-v4znl Jan 27 13:48:05.373: INFO: got data: { "image": "nautilus.jpg" } Jan 27 13:48:05.373: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 27 13:48:05.373: INFO: update-demo-nautilus-v4znl is verified up and running STEP: scaling down the replication controller Jan 27 13:48:05.375: INFO: scanned /root for discovery docs: Jan 27 13:48:05.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1378' Jan 27 13:48:06.537: INFO: stderr: "" Jan 27 13:48:06.537: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 27 13:48:06.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:48:06.667: INFO: stderr: "" Jan 27 13:48:06.667: INFO: stdout: "update-demo-nautilus-l4p7g update-demo-nautilus-v4znl " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 27 13:48:11.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:48:11.846: INFO: stderr: "" Jan 27 13:48:11.846: INFO: stdout: "update-demo-nautilus-l4p7g update-demo-nautilus-v4znl " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 27 13:48:16.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:48:16.985: INFO: stderr: "" Jan 27 13:48:16.985: INFO: stdout: "update-demo-nautilus-v4znl " Jan 27 13:48:16.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4znl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:17.094: INFO: stderr: "" Jan 27 13:48:17.094: INFO: stdout: "true" Jan 27 13:48:17.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4znl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:17.201: INFO: stderr: "" Jan 27 13:48:17.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 27 13:48:17.201: INFO: validating pod update-demo-nautilus-v4znl Jan 27 13:48:17.219: INFO: got data: { "image": "nautilus.jpg" } Jan 27 13:48:17.219: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 27 13:48:17.219: INFO: update-demo-nautilus-v4znl is verified up and running STEP: scaling up the replication controller Jan 27 13:48:17.222: INFO: scanned /root for discovery docs: Jan 27 13:48:17.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1378' Jan 27 13:48:18.374: INFO: stderr: "" Jan 27 13:48:18.374: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 27 13:48:18.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:48:18.545: INFO: stderr: "" Jan 27 13:48:18.545: INFO: stdout: "update-demo-nautilus-m77f8 update-demo-nautilus-v4znl " Jan 27 13:48:18.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m77f8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:18.643: INFO: stderr: "" Jan 27 13:48:18.643: INFO: stdout: "" Jan 27 13:48:18.643: INFO: update-demo-nautilus-m77f8 is created but not running Jan 27 13:48:23.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:48:23.799: INFO: stderr: "" Jan 27 13:48:23.799: INFO: stdout: "update-demo-nautilus-m77f8 update-demo-nautilus-v4znl " Jan 27 13:48:23.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m77f8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:23.922: INFO: stderr: "" Jan 27 13:48:23.922: INFO: stdout: "" Jan 27 13:48:23.922: INFO: update-demo-nautilus-m77f8 is created but not running Jan 27 13:48:28.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1378' Jan 27 13:48:29.027: INFO: stderr: "" Jan 27 13:48:29.028: INFO: stdout: "update-demo-nautilus-m77f8 update-demo-nautilus-v4znl " Jan 27 13:48:29.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m77f8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:29.148: INFO: stderr: "" Jan 27 13:48:29.148: INFO: stdout: "true" Jan 27 13:48:29.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m77f8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:29.233: INFO: stderr: "" Jan 27 13:48:29.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 27 13:48:29.233: INFO: validating pod update-demo-nautilus-m77f8 Jan 27 13:48:29.251: INFO: got data: { "image": "nautilus.jpg" } Jan 27 13:48:29.251: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 27 13:48:29.251: INFO: update-demo-nautilus-m77f8 is verified up and running Jan 27 13:48:29.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4znl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:29.359: INFO: stderr: "" Jan 27 13:48:29.360: INFO: stdout: "true" Jan 27 13:48:29.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4znl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1378' Jan 27 13:48:29.480: INFO: stderr: "" Jan 27 13:48:29.480: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 27 13:48:29.480: INFO: validating pod update-demo-nautilus-v4znl Jan 27 13:48:29.485: INFO: got data: { "image": "nautilus.jpg" } Jan 27 13:48:29.485: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 27 13:48:29.485: INFO: update-demo-nautilus-v4znl is verified up and running STEP: using delete to clean up resources Jan 27 13:48:29.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1378' Jan 27 13:48:29.611: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 27 13:48:29.611: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 27 13:48:29.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1378' Jan 27 13:48:29.726: INFO: stderr: "No resources found.\n" Jan 27 13:48:29.726: INFO: stdout: "" Jan 27 13:48:29.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1378 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 27 13:48:29.814: INFO: stderr: "" Jan 27 13:48:29.815: INFO: stdout: "update-demo-nautilus-m77f8\nupdate-demo-nautilus-v4znl\n" Jan 27 13:48:30.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1378' Jan 27 13:48:30.438: INFO: stderr: "No resources found.\n" Jan 27 13:48:30.439: INFO: stdout: "" Jan 27 13:48:30.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1378 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 27 13:48:30.588: INFO: stderr: "" Jan 27 13:48:30.588: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 13:48:30.588: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1378" for this suite. Jan 27 13:48:53.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 13:48:53.941: INFO: namespace kubectl-1378 deletion completed in 23.345119168s • [SLOW TEST:60.968 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 13:48:53.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8939 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8939 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8939 Jan 27 13:48:54.103: INFO: Found 0 stateful pods, waiting for 1 Jan 27 13:49:04.115: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 27 13:49:04.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 13:49:05.168: INFO: stderr: "I0127 13:49:04.759473 2055 log.go:172] (0xc0009ce0b0) (0xc0007f46e0) Create stream\nI0127 13:49:04.759856 2055 log.go:172] (0xc0009ce0b0) (0xc0007f46e0) Stream added, broadcasting: 1\nI0127 13:49:04.766626 2055 log.go:172] (0xc0009ce0b0) Reply frame received for 1\nI0127 13:49:04.766715 2055 log.go:172] (0xc0009ce0b0) (0xc0005ae280) Create stream\nI0127 13:49:04.766723 2055 log.go:172] (0xc0009ce0b0) (0xc0005ae280) Stream added, broadcasting: 3\nI0127 13:49:04.768817 2055 log.go:172] (0xc0009ce0b0) Reply frame received for 3\nI0127 13:49:04.768850 2055 log.go:172] (0xc0009ce0b0) (0xc0007f4780) Create stream\nI0127 13:49:04.768858 2055 log.go:172] (0xc0009ce0b0) (0xc0007f4780) Stream added, broadcasting: 5\nI0127 13:49:04.770605 2055 log.go:172] (0xc0009ce0b0) Reply frame received for 5\nI0127 13:49:04.911635 2055 log.go:172] (0xc0009ce0b0) Data frame received for 5\nI0127 13:49:04.911712 2055 log.go:172] (0xc0007f4780) (5) Data frame handling\nI0127 13:49:04.911740 2055 log.go:172] (0xc0007f4780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 13:49:04.972015 2055 log.go:172] (0xc0009ce0b0) Data frame received for 3\nI0127 13:49:04.972155 2055 
log.go:172] (0xc0005ae280) (3) Data frame handling\nI0127 13:49:04.972185 2055 log.go:172] (0xc0005ae280) (3) Data frame sent\nI0127 13:49:05.154679 2055 log.go:172] (0xc0009ce0b0) Data frame received for 1\nI0127 13:49:05.155000 2055 log.go:172] (0xc0009ce0b0) (0xc0007f4780) Stream removed, broadcasting: 5\nI0127 13:49:05.155079 2055 log.go:172] (0xc0007f46e0) (1) Data frame handling\nI0127 13:49:05.155116 2055 log.go:172] (0xc0007f46e0) (1) Data frame sent\nI0127 13:49:05.155193 2055 log.go:172] (0xc0009ce0b0) (0xc0005ae280) Stream removed, broadcasting: 3\nI0127 13:49:05.155236 2055 log.go:172] (0xc0009ce0b0) (0xc0007f46e0) Stream removed, broadcasting: 1\nI0127 13:49:05.155250 2055 log.go:172] (0xc0009ce0b0) Go away received\nI0127 13:49:05.157524 2055 log.go:172] (0xc0009ce0b0) (0xc0007f46e0) Stream removed, broadcasting: 1\nI0127 13:49:05.157575 2055 log.go:172] (0xc0009ce0b0) (0xc0005ae280) Stream removed, broadcasting: 3\nI0127 13:49:05.157589 2055 log.go:172] (0xc0009ce0b0) (0xc0007f4780) Stream removed, broadcasting: 5\n" Jan 27 13:49:05.168: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 13:49:05.168: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 13:49:05.188: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 13:49:05.188: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 13:49:05.218: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999492s Jan 27 13:49:06.261: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991486378s Jan 27 13:49:07.271: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.949030153s Jan 27 13:49:08.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.938488144s Jan 27 13:49:09.296: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.924799862s Jan 27 
13:49:10.311: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.9132196s Jan 27 13:49:11.324: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.898710586s Jan 27 13:49:12.332: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.886046659s Jan 27 13:49:13.354: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.877128959s Jan 27 13:49:14.366: INFO: Verifying statefulset ss doesn't scale past 1 for another 855.501352ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8939 Jan 27 13:49:15.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:49:15.900: INFO: stderr: "I0127 13:49:15.602088 2075 log.go:172] (0xc000944420) (0xc000610640) Create stream\nI0127 13:49:15.602347 2075 log.go:172] (0xc000944420) (0xc000610640) Stream added, broadcasting: 1\nI0127 13:49:15.609930 2075 log.go:172] (0xc000944420) Reply frame received for 1\nI0127 13:49:15.610009 2075 log.go:172] (0xc000944420) (0xc000656140) Create stream\nI0127 13:49:15.610021 2075 log.go:172] (0xc000944420) (0xc000656140) Stream added, broadcasting: 3\nI0127 13:49:15.611647 2075 log.go:172] (0xc000944420) Reply frame received for 3\nI0127 13:49:15.611691 2075 log.go:172] (0xc000944420) (0xc0006106e0) Create stream\nI0127 13:49:15.611709 2075 log.go:172] (0xc000944420) (0xc0006106e0) Stream added, broadcasting: 5\nI0127 13:49:15.613392 2075 log.go:172] (0xc000944420) Reply frame received for 5\nI0127 13:49:15.728401 2075 log.go:172] (0xc000944420) Data frame received for 5\nI0127 13:49:15.728497 2075 log.go:172] (0xc0006106e0) (5) Data frame handling\nI0127 13:49:15.728534 2075 log.go:172] (0xc0006106e0) (5) Data frame sent\nI0127 13:49:15.728541 2075 log.go:172] (0xc000944420) Data frame received for 5\nI0127 13:49:15.728545 2075 log.go:172] 
(0xc0006106e0) (5) Data frame handling\n+ mvI0127 13:49:15.728631 2075 log.go:172] (0xc0006106e0) (5) Data frame sent\nI0127 13:49:15.728647 2075 log.go:172] (0xc000944420) Data frame received for 5\nI0127 13:49:15.728657 2075 log.go:172] (0xc0006106e0) (5) Data frame handling\nI0127 13:49:15.728666 2075 log.go:172] (0xc0006106e0) (5) Data frame sent\nI0127 13:49:15.728675 2075 log.go:172] (0xc000944420) Data frame received for 5\nI0127 13:49:15.728683 2075 log.go:172] (0xc0006106e0) (5) Data frame handling\n -v /tmp/index.htmlI0127 13:49:15.728698 2075 log.go:172] (0xc0006106e0) (5) Data frame sent\nI0127 13:49:15.728707 2075 log.go:172] (0xc000944420) Data frame received for 5\nI0127 13:49:15.728715 2075 log.go:172] (0xc0006106e0) (5) Data frame handling\nI0127 13:49:15.728722 2075 log.go:172] (0xc0006106e0) (5) Data frame sent\n /usr/share/nginx/html/I0127 13:49:15.729880 2075 log.go:172] (0xc000944420) Data frame received for 5\nI0127 13:49:15.729909 2075 log.go:172] (0xc0006106e0) (5) Data frame handling\nI0127 13:49:15.729925 2075 log.go:172] (0xc0006106e0) (5) Data frame sent\n\nI0127 13:49:15.731441 2075 log.go:172] (0xc000944420) Data frame received for 3\nI0127 13:49:15.731469 2075 log.go:172] (0xc000656140) (3) Data frame handling\nI0127 13:49:15.731490 2075 log.go:172] (0xc000656140) (3) Data frame sent\nI0127 13:49:15.892967 2075 log.go:172] (0xc000944420) Data frame received for 1\nI0127 13:49:15.893225 2075 log.go:172] (0xc000944420) (0xc000656140) Stream removed, broadcasting: 3\nI0127 13:49:15.893282 2075 log.go:172] (0xc000610640) (1) Data frame handling\nI0127 13:49:15.893341 2075 log.go:172] (0xc000610640) (1) Data frame sent\nI0127 13:49:15.893510 2075 log.go:172] (0xc000944420) (0xc0006106e0) Stream removed, broadcasting: 5\nI0127 13:49:15.893647 2075 log.go:172] (0xc000944420) (0xc000610640) Stream removed, broadcasting: 1\nI0127 13:49:15.893688 2075 log.go:172] (0xc000944420) Go away received\nI0127 13:49:15.894690 2075 log.go:172] 
(0xc000944420) (0xc000610640) Stream removed, broadcasting: 1\nI0127 13:49:15.894714 2075 log.go:172] (0xc000944420) (0xc000656140) Stream removed, broadcasting: 3\nI0127 13:49:15.894722 2075 log.go:172] (0xc000944420) (0xc0006106e0) Stream removed, broadcasting: 5\n" Jan 27 13:49:15.900: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 13:49:15.900: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 13:49:15.910: INFO: Found 1 stateful pods, waiting for 3 Jan 27 13:49:25.931: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:49:25.931: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:49:25.931: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 13:49:35.923: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:49:35.923: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 13:49:35.923: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 27 13:49:35.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 13:49:36.698: INFO: stderr: "I0127 13:49:36.145724 2092 log.go:172] (0xc000146e70) (0xc00079e820) Create stream\nI0127 13:49:36.145999 2092 log.go:172] (0xc000146e70) (0xc00079e820) Stream added, broadcasting: 1\nI0127 13:49:36.167418 2092 log.go:172] (0xc000146e70) Reply frame received for 1\nI0127 13:49:36.167574 2092 log.go:172] (0xc000146e70) (0xc0007b80a0) Create stream\nI0127 13:49:36.167599 2092 log.go:172] (0xc000146e70) 
(0xc0007b80a0) Stream added, broadcasting: 3\nI0127 13:49:36.170512 2092 log.go:172] (0xc000146e70) Reply frame received for 3\nI0127 13:49:36.170565 2092 log.go:172] (0xc000146e70) (0xc00070e000) Create stream\nI0127 13:49:36.170589 2092 log.go:172] (0xc000146e70) (0xc00070e000) Stream added, broadcasting: 5\nI0127 13:49:36.172964 2092 log.go:172] (0xc000146e70) Reply frame received for 5\nI0127 13:49:36.397611 2092 log.go:172] (0xc000146e70) Data frame received for 5\nI0127 13:49:36.397776 2092 log.go:172] (0xc00070e000) (5) Data frame handling\nI0127 13:49:36.397799 2092 log.go:172] (0xc00070e000) (5) Data frame sent\nI0127 13:49:36.397803 2092 log.go:172] (0xc000146e70) Data frame received for 5\nI0127 13:49:36.397806 2092 log.go:172] (0xc00070e000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 13:49:36.397832 2092 log.go:172] (0xc00070e000) (5) Data frame sent\nI0127 13:49:36.397842 2092 log.go:172] (0xc000146e70) Data frame received for 3\nI0127 13:49:36.397851 2092 log.go:172] (0xc0007b80a0) (3) Data frame handling\nI0127 13:49:36.397855 2092 log.go:172] (0xc0007b80a0) (3) Data frame sent\nI0127 13:49:36.684962 2092 log.go:172] (0xc000146e70) Data frame received for 1\nI0127 13:49:36.685153 2092 log.go:172] (0xc000146e70) (0xc00070e000) Stream removed, broadcasting: 5\nI0127 13:49:36.685211 2092 log.go:172] (0xc00079e820) (1) Data frame handling\nI0127 13:49:36.685232 2092 log.go:172] (0xc00079e820) (1) Data frame sent\nI0127 13:49:36.685475 2092 log.go:172] (0xc000146e70) (0xc0007b80a0) Stream removed, broadcasting: 3\nI0127 13:49:36.685611 2092 log.go:172] (0xc000146e70) (0xc00079e820) Stream removed, broadcasting: 1\nI0127 13:49:36.685669 2092 log.go:172] (0xc000146e70) Go away received\nI0127 13:49:36.686492 2092 log.go:172] (0xc000146e70) (0xc00079e820) Stream removed, broadcasting: 1\nI0127 13:49:36.686538 2092 log.go:172] (0xc000146e70) (0xc0007b80a0) Stream removed, broadcasting: 3\nI0127 13:49:36.686617 2092 
log.go:172] (0xc000146e70) (0xc00070e000) Stream removed, broadcasting: 5\n" Jan 27 13:49:36.699: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 13:49:36.699: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 13:49:36.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 13:49:37.185: INFO: stderr: "I0127 13:49:36.904095 2114 log.go:172] (0xc0007a20b0) (0xc0007280a0) Create stream\nI0127 13:49:36.904455 2114 log.go:172] (0xc0007a20b0) (0xc0007280a0) Stream added, broadcasting: 1\nI0127 13:49:36.908955 2114 log.go:172] (0xc0007a20b0) Reply frame received for 1\nI0127 13:49:36.909001 2114 log.go:172] (0xc0007a20b0) (0xc00089c000) Create stream\nI0127 13:49:36.909009 2114 log.go:172] (0xc0007a20b0) (0xc00089c000) Stream added, broadcasting: 3\nI0127 13:49:36.910492 2114 log.go:172] (0xc0007a20b0) Reply frame received for 3\nI0127 13:49:36.910513 2114 log.go:172] (0xc0007a20b0) (0xc0005c01e0) Create stream\nI0127 13:49:36.910524 2114 log.go:172] (0xc0007a20b0) (0xc0005c01e0) Stream added, broadcasting: 5\nI0127 13:49:36.911577 2114 log.go:172] (0xc0007a20b0) Reply frame received for 5\nI0127 13:49:37.015682 2114 log.go:172] (0xc0007a20b0) Data frame received for 5\nI0127 13:49:37.015792 2114 log.go:172] (0xc0005c01e0) (5) Data frame handling\nI0127 13:49:37.015812 2114 log.go:172] (0xc0005c01e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 13:49:37.053405 2114 log.go:172] (0xc0007a20b0) Data frame received for 3\nI0127 13:49:37.053625 2114 log.go:172] (0xc00089c000) (3) Data frame handling\nI0127 13:49:37.053662 2114 log.go:172] (0xc00089c000) (3) Data frame sent\nI0127 13:49:37.179652 2114 log.go:172] (0xc0007a20b0) (0xc00089c000) Stream removed, broadcasting: 3\nI0127 
13:49:37.179769 2114 log.go:172] (0xc0007a20b0) Data frame received for 1\nI0127 13:49:37.179782 2114 log.go:172] (0xc0007280a0) (1) Data frame handling\nI0127 13:49:37.179804 2114 log.go:172] (0xc0007280a0) (1) Data frame sent\nI0127 13:49:37.179892 2114 log.go:172] (0xc0007a20b0) (0xc0007280a0) Stream removed, broadcasting: 1\nI0127 13:49:37.180170 2114 log.go:172] (0xc0007a20b0) (0xc0005c01e0) Stream removed, broadcasting: 5\nI0127 13:49:37.180354 2114 log.go:172] (0xc0007a20b0) Go away received\nI0127 13:49:37.180478 2114 log.go:172] (0xc0007a20b0) (0xc0007280a0) Stream removed, broadcasting: 1\nI0127 13:49:37.180494 2114 log.go:172] (0xc0007a20b0) (0xc00089c000) Stream removed, broadcasting: 3\nI0127 13:49:37.180499 2114 log.go:172] (0xc0007a20b0) (0xc0005c01e0) Stream removed, broadcasting: 5\n" Jan 27 13:49:37.185: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 13:49:37.186: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 13:49:37.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 13:49:37.553: INFO: stderr: "I0127 13:49:37.348369 2128 log.go:172] (0xc000a14420) (0xc00039e820) Create stream\nI0127 13:49:37.348517 2128 log.go:172] (0xc000a14420) (0xc00039e820) Stream added, broadcasting: 1\nI0127 13:49:37.355985 2128 log.go:172] (0xc000a14420) Reply frame received for 1\nI0127 13:49:37.356014 2128 log.go:172] (0xc000a14420) (0xc0009d4000) Create stream\nI0127 13:49:37.356025 2128 log.go:172] (0xc000a14420) (0xc0009d4000) Stream added, broadcasting: 3\nI0127 13:49:37.357160 2128 log.go:172] (0xc000a14420) Reply frame received for 3\nI0127 13:49:37.357182 2128 log.go:172] (0xc000a14420) (0xc00083e000) Create stream\nI0127 13:49:37.357191 2128 log.go:172] (0xc000a14420) (0xc00083e000) Stream added, 
broadcasting: 5\nI0127 13:49:37.358058 2128 log.go:172] (0xc000a14420) Reply frame received for 5\nI0127 13:49:37.437484 2128 log.go:172] (0xc000a14420) Data frame received for 5\nI0127 13:49:37.437616 2128 log.go:172] (0xc00083e000) (5) Data frame handling\nI0127 13:49:37.437634 2128 log.go:172] (0xc00083e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0127 13:49:37.460097 2128 log.go:172] (0xc000a14420) Data frame received for 3\nI0127 13:49:37.460154 2128 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0127 13:49:37.460165 2128 log.go:172] (0xc0009d4000) (3) Data frame sent\nI0127 13:49:37.543485 2128 log.go:172] (0xc000a14420) Data frame received for 1\nI0127 13:49:37.543653 2128 log.go:172] (0xc000a14420) (0xc00083e000) Stream removed, broadcasting: 5\nI0127 13:49:37.544509 2128 log.go:172] (0xc000a14420) (0xc0009d4000) Stream removed, broadcasting: 3\nI0127 13:49:37.544558 2128 log.go:172] (0xc00039e820) (1) Data frame handling\nI0127 13:49:37.544590 2128 log.go:172] (0xc00039e820) (1) Data frame sent\nI0127 13:49:37.544611 2128 log.go:172] (0xc000a14420) (0xc00039e820) Stream removed, broadcasting: 1\nI0127 13:49:37.544625 2128 log.go:172] (0xc000a14420) Go away received\nI0127 13:49:37.546515 2128 log.go:172] (0xc000a14420) (0xc00039e820) Stream removed, broadcasting: 1\nI0127 13:49:37.546534 2128 log.go:172] (0xc000a14420) (0xc0009d4000) Stream removed, broadcasting: 3\nI0127 13:49:37.546542 2128 log.go:172] (0xc000a14420) (0xc00083e000) Stream removed, broadcasting: 5\n" Jan 27 13:49:37.553: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 13:49:37.553: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 13:49:37.553: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 13:49:37.561: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 27 13:49:47.581: INFO: 
Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 13:49:47.581: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 27 13:49:47.581: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 27 13:49:47.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999324s Jan 27 13:49:48.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.889603607s Jan 27 13:49:49.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.872520535s Jan 27 13:49:50.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.854622848s Jan 27 13:49:51.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.844278671s Jan 27 13:49:52.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.834355504s Jan 27 13:49:53.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.796220492s Jan 27 13:49:54.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.780792343s Jan 27 13:49:55.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.768787517s Jan 27 13:49:56.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 755.854868ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8939 Jan 27 13:49:57.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:49:58.396: INFO: stderr: "I0127 13:49:58.072409 2148 log.go:172] (0xc0008c60b0) (0xc000786640) Create stream\nI0127 13:49:58.072604 2148 log.go:172] (0xc0008c60b0) (0xc000786640) Stream added, broadcasting: 1\nI0127 13:49:58.082310 2148 log.go:172] (0xc0008c60b0) Reply frame received for 1\nI0127 13:49:58.082385 2148 log.go:172] (0xc0008c60b0) (0xc000712000) Create stream\nI0127 
13:49:58.082400 2148 log.go:172] (0xc0008c60b0) (0xc000712000) Stream added, broadcasting: 3\nI0127 13:49:58.084737 2148 log.go:172] (0xc0008c60b0) Reply frame received for 3\nI0127 13:49:58.084799 2148 log.go:172] (0xc0008c60b0) (0xc0005c6460) Create stream\nI0127 13:49:58.084829 2148 log.go:172] (0xc0008c60b0) (0xc0005c6460) Stream added, broadcasting: 5\nI0127 13:49:58.089121 2148 log.go:172] (0xc0008c60b0) Reply frame received for 5\nI0127 13:49:58.246476 2148 log.go:172] (0xc0008c60b0) Data frame received for 3\nI0127 13:49:58.246708 2148 log.go:172] (0xc000712000) (3) Data frame handling\nI0127 13:49:58.246753 2148 log.go:172] (0xc000712000) (3) Data frame sent\nI0127 13:49:58.246845 2148 log.go:172] (0xc0008c60b0) Data frame received for 5\nI0127 13:49:58.246859 2148 log.go:172] (0xc0005c6460) (5) Data frame handling\nI0127 13:49:58.246871 2148 log.go:172] (0xc0005c6460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0127 13:49:58.386581 2148 log.go:172] (0xc0008c60b0) Data frame received for 1\nI0127 13:49:58.386740 2148 log.go:172] (0xc0008c60b0) (0xc000712000) Stream removed, broadcasting: 3\nI0127 13:49:58.386835 2148 log.go:172] (0xc000786640) (1) Data frame handling\nI0127 13:49:58.386863 2148 log.go:172] (0xc000786640) (1) Data frame sent\nI0127 13:49:58.386900 2148 log.go:172] (0xc0008c60b0) (0xc0005c6460) Stream removed, broadcasting: 5\nI0127 13:49:58.387055 2148 log.go:172] (0xc0008c60b0) (0xc000786640) Stream removed, broadcasting: 1\nI0127 13:49:58.387076 2148 log.go:172] (0xc0008c60b0) Go away received\nI0127 13:49:58.388330 2148 log.go:172] (0xc0008c60b0) (0xc000786640) Stream removed, broadcasting: 1\nI0127 13:49:58.388347 2148 log.go:172] (0xc0008c60b0) (0xc000712000) Stream removed, broadcasting: 3\nI0127 13:49:58.388355 2148 log.go:172] (0xc0008c60b0) (0xc0005c6460) Stream removed, broadcasting: 5\n" Jan 27 13:49:58.396: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 13:49:58.396: 
INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 13:49:58.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:49:58.760: INFO: stderr: "I0127 13:49:58.594156 2163 log.go:172] (0xc0009b80b0) (0xc00083e640) Create stream\nI0127 13:49:58.594460 2163 log.go:172] (0xc0009b80b0) (0xc00083e640) Stream added, broadcasting: 1\nI0127 13:49:58.597323 2163 log.go:172] (0xc0009b80b0) Reply frame received for 1\nI0127 13:49:58.597364 2163 log.go:172] (0xc0009b80b0) (0xc0008b0000) Create stream\nI0127 13:49:58.597373 2163 log.go:172] (0xc0009b80b0) (0xc0008b0000) Stream added, broadcasting: 3\nI0127 13:49:58.598818 2163 log.go:172] (0xc0009b80b0) Reply frame received for 3\nI0127 13:49:58.598844 2163 log.go:172] (0xc0009b80b0) (0xc0007d0280) Create stream\nI0127 13:49:58.598852 2163 log.go:172] (0xc0009b80b0) (0xc0007d0280) Stream added, broadcasting: 5\nI0127 13:49:58.600059 2163 log.go:172] (0xc0009b80b0) Reply frame received for 5\nI0127 13:49:58.673040 2163 log.go:172] (0xc0009b80b0) Data frame received for 3\nI0127 13:49:58.673054 2163 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0127 13:49:58.673062 2163 log.go:172] (0xc0008b0000) (3) Data frame sent\nI0127 13:49:58.673071 2163 log.go:172] (0xc0009b80b0) Data frame received for 5\nI0127 13:49:58.673079 2163 log.go:172] (0xc0007d0280) (5) Data frame handling\nI0127 13:49:58.673087 2163 log.go:172] (0xc0007d0280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0127 13:49:58.749572 2163 log.go:172] (0xc0009b80b0) Data frame received for 1\nI0127 13:49:58.749593 2163 log.go:172] (0xc00083e640) (1) Data frame handling\nI0127 13:49:58.749601 2163 log.go:172] (0xc00083e640) (1) Data frame sent\nI0127 13:49:58.749622 2163 log.go:172] (0xc0009b80b0) (0xc00083e640) Stream 
removed, broadcasting: 1\nI0127 13:49:58.749636 2163 log.go:172] (0xc0009b80b0) (0xc0008b0000) Stream removed, broadcasting: 3\nI0127 13:49:58.752963 2163 log.go:172] (0xc0009b80b0) (0xc0007d0280) Stream removed, broadcasting: 5\nI0127 13:49:58.752997 2163 log.go:172] (0xc0009b80b0) (0xc00083e640) Stream removed, broadcasting: 1\nI0127 13:49:58.753010 2163 log.go:172] (0xc0009b80b0) (0xc0008b0000) Stream removed, broadcasting: 3\nI0127 13:49:58.753015 2163 log.go:172] (0xc0009b80b0) (0xc0007d0280) Stream removed, broadcasting: 5\n" Jan 27 13:49:58.760: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 13:49:58.760: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 13:49:58.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8939 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 13:49:59.256: INFO: stderr: "I0127 13:49:58.924159 2183 log.go:172] (0xc0001176b0) (0xc00076b180) Create stream\nI0127 13:49:58.924379 2183 log.go:172] (0xc0001176b0) (0xc00076b180) Stream added, broadcasting: 1\nI0127 13:49:58.935491 2183 log.go:172] (0xc0001176b0) Reply frame received for 1\nI0127 13:49:58.935541 2183 log.go:172] (0xc0001176b0) (0xc00076a320) Create stream\nI0127 13:49:58.935548 2183 log.go:172] (0xc0001176b0) (0xc00076a320) Stream added, broadcasting: 3\nI0127 13:49:58.936873 2183 log.go:172] (0xc0001176b0) Reply frame received for 3\nI0127 13:49:58.936904 2183 log.go:172] (0xc0001176b0) (0xc0001ee000) Create stream\nI0127 13:49:58.936915 2183 log.go:172] (0xc0001176b0) (0xc0001ee000) Stream added, broadcasting: 5\nI0127 13:49:58.938159 2183 log.go:172] (0xc0001176b0) Reply frame received for 5\nI0127 13:49:59.041827 2183 log.go:172] (0xc0001176b0) Data frame received for 5\nI0127 13:49:59.042053 2183 log.go:172] (0xc0001ee000) (5) Data frame handling\nI0127 
13:49:59.042104 2183 log.go:172] (0xc0001ee000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0127 13:49:59.042148 2183 log.go:172] (0xc0001176b0) Data frame received for 3\nI0127 13:49:59.042168 2183 log.go:172] (0xc00076a320) (3) Data frame handling\nI0127 13:49:59.042183 2183 log.go:172] (0xc00076a320) (3) Data frame sent\nI0127 13:49:59.245009 2183 log.go:172] (0xc0001176b0) Data frame received for 1\nI0127 13:49:59.245500 2183 log.go:172] (0xc0001176b0) (0xc00076a320) Stream removed, broadcasting: 3\nI0127 13:49:59.245625 2183 log.go:172] (0xc00076b180) (1) Data frame handling\nI0127 13:49:59.245658 2183 log.go:172] (0xc00076b180) (1) Data frame sent\nI0127 13:49:59.245710 2183 log.go:172] (0xc0001176b0) (0xc0001ee000) Stream removed, broadcasting: 5\nI0127 13:49:59.245765 2183 log.go:172] (0xc0001176b0) (0xc00076b180) Stream removed, broadcasting: 1\nI0127 13:49:59.245785 2183 log.go:172] (0xc0001176b0) Go away received\nI0127 13:49:59.247743 2183 log.go:172] (0xc0001176b0) (0xc00076b180) Stream removed, broadcasting: 1\nI0127 13:49:59.248034 2183 log.go:172] (0xc0001176b0) (0xc00076a320) Stream removed, broadcasting: 3\nI0127 13:49:59.248098 2183 log.go:172] (0xc0001176b0) (0xc0001ee000) Stream removed, broadcasting: 5\n"
Jan 27 13:49:59.256: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 27 13:49:59.256: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 27 13:49:59.256: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 27 13:50:29.333: INFO: Deleting all statefulset in ns statefulset-8939
Jan 27 13:50:29.341: INFO: Scaling statefulset ss to 0
Jan 27 13:50:29.357: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 13:50:29.361: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:50:29.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8939" for this suite.
Jan 27 13:50:35.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:50:35.597: INFO: namespace statefulset-8939 deletion completed in 6.20195575s

• [SLOW TEST:101.655 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:50:35.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 27 13:50:35.834: INFO: Waiting up to 5m0s for pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22" in namespace "downward-api-4273" to be "success or failure"
Jan 27 13:50:35.854: INFO: Pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22": Phase="Pending", Reason="", readiness=false. Elapsed: 19.617554ms
Jan 27 13:50:37.865: INFO: Pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030609361s
Jan 27 13:50:39.877: INFO: Pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043009585s
Jan 27 13:50:41.923: INFO: Pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088148244s
Jan 27 13:50:43.931: INFO: Pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096972957s
Jan 27 13:50:45.945: INFO: Pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111086461s
STEP: Saw pod success
Jan 27 13:50:45.946: INFO: Pod "downward-api-96dd9102-e784-4ff8-9af9-e366de611f22" satisfied condition "success or failure"
Jan 27 13:50:45.953: INFO: Trying to get logs from node iruya-node pod downward-api-96dd9102-e784-4ff8-9af9-e366de611f22 container dapi-container:
STEP: delete the pod
Jan 27 13:50:46.077: INFO: Waiting for pod downward-api-96dd9102-e784-4ff8-9af9-e366de611f22 to disappear
Jan 27 13:50:46.093: INFO: Pod downward-api-96dd9102-e784-4ff8-9af9-e366de611f22 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:50:46.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4273" for this suite.
Jan 27 13:50:52.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:50:52.323: INFO: namespace downward-api-4273 deletion completed in 6.15693474s

• [SLOW TEST:16.725 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:50:52.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-02b22cee-b3e9-4c8e-9823-34fef46ae7dd
STEP: Creating a pod to test consume secrets
Jan 27 13:50:52.482: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85" in namespace "projected-1191" to be "success or failure"
Jan 27 13:50:52.486: INFO: Pod "pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29652ms
Jan 27 13:50:54.499: INFO: Pod "pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017677191s
Jan 27 13:50:56.513: INFO: Pod "pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030821328s
Jan 27 13:50:58.532: INFO: Pod "pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049948113s
Jan 27 13:51:00.547: INFO: Pod "pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065341216s
STEP: Saw pod success
Jan 27 13:51:00.547: INFO: Pod "pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85" satisfied condition "success or failure"
Jan 27 13:51:00.552: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85 container projected-secret-volume-test:
STEP: delete the pod
Jan 27 13:51:00.642: INFO: Waiting for pod pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85 to disappear
Jan 27 13:51:00.676: INFO: Pod pod-projected-secrets-3c009c85-a257-4458-b8e4-0ff5e16def85 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:51:00.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1191" for this suite.
Jan 27 13:51:06.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:51:06.841: INFO: namespace projected-1191 deletion completed in 6.155928372s

• [SLOW TEST:14.517 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:51:06.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8987.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8987.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8987.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8987.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.156.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.156.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.156.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.156.246_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8987.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8987.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8987.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8987.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8987.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8987.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.156.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.156.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.156.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.156.246_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 13:51:19.170: INFO: Unable to read wheezy_udp@dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.189: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.196: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.203: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.210: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.216: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.222: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.226: INFO: Unable to read 10.104.156.246_udp@PTR from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.231: INFO: Unable to read 10.104.156.246_tcp@PTR from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.235: INFO: Unable to read jessie_udp@dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.245: INFO: Unable to read jessie_tcp@dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.250: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.258: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.280: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.302: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8987.svc.cluster.local from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.308: INFO: Unable to read jessie_udp@PodARecord from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.319: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.324: INFO: Unable to read 10.104.156.246_udp@PTR from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.329: INFO: Unable to read 10.104.156.246_tcp@PTR from pod dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0: the server could not find the requested resource (get pods dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0)
Jan 27 13:51:19.329: INFO: Lookups using dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0 failed for: [wheezy_udp@dns-test-service.dns-8987.svc.cluster.local wheezy_tcp@dns-test-service.dns-8987.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-8987.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-8987.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.104.156.246_udp@PTR 10.104.156.246_tcp@PTR jessie_udp@dns-test-service.dns-8987.svc.cluster.local jessie_tcp@dns-test-service.dns-8987.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8987.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8987.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8987.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.104.156.246_udp@PTR 10.104.156.246_tcp@PTR]
Jan 27 13:51:24.486: INFO: DNS probes using dns-8987/dns-test-93d7082c-f16c-4f2e-82f6-db1194dac0d0 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:51:24.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8987" for this suite.
Jan 27 13:51:30.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:51:31.027: INFO: namespace dns-8987 deletion completed in 6.199733665s

• [SLOW TEST:24.185 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:51:31.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4030
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 13:51:31.142: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 13:52:05.385: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4030 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 13:52:05.386: INFO: >>> kubeConfig: /root/.kube/config
I0127 13:52:05.474637 8 log.go:172] (0xc0015886e0) (0xc00209c820) Create stream
I0127 13:52:05.475029 8 log.go:172] (0xc0015886e0) (0xc00209c820) Stream added, broadcasting: 1
I0127 13:52:05.483239 8 log.go:172] (0xc0015886e0) Reply frame received for 1
I0127 13:52:05.483320 8 log.go:172] (0xc0015886e0) (0xc0018f2a00) Create stream
I0127 13:52:05.483340 8 log.go:172] (0xc0015886e0) (0xc0018f2a00) Stream added, broadcasting: 3
I0127 13:52:05.486227 8 log.go:172] (0xc0015886e0) Reply frame received for 3
I0127 13:52:05.486280 8 log.go:172] (0xc0015886e0) (0xc00209c8c0) Create stream
I0127 13:52:05.486295 8 log.go:172] (0xc0015886e0) (0xc00209c8c0) Stream added, broadcasting: 5
I0127 13:52:05.488504 8 log.go:172] (0xc0015886e0) Reply frame received for 5
I0127 13:52:05.719200 8 log.go:172] (0xc0015886e0) Data frame received for 3
I0127 13:52:05.719256 8 log.go:172] (0xc0018f2a00) (3) Data frame handling
I0127 13:52:05.719272 8 log.go:172] (0xc0018f2a00) (3) Data frame sent
I0127 13:52:05.875458 8 log.go:172] (0xc0015886e0) Data frame received for 1
I0127 13:52:05.875744 8 log.go:172] (0xc0015886e0) (0xc00209c8c0) Stream removed, broadcasting: 5
I0127 13:52:05.875881 8 log.go:172] (0xc00209c820) (1) Data frame handling
I0127 13:52:05.875965 8 log.go:172] (0xc00209c820) (1) Data frame sent
I0127 13:52:05.876049 8 log.go:172] (0xc0015886e0) (0xc0018f2a00) Stream removed, broadcasting: 3
I0127 13:52:05.876205 8 log.go:172] (0xc0015886e0) (0xc00209c820) Stream removed, broadcasting: 1
I0127 13:52:05.876276 8 log.go:172] (0xc0015886e0) Go away received
I0127 13:52:05.876597 8 log.go:172] (0xc0015886e0) (0xc00209c820) Stream removed, broadcasting: 1
I0127 13:52:05.876623 8 log.go:172] (0xc0015886e0) (0xc0018f2a00) Stream removed, broadcasting: 3
I0127 13:52:05.876638 8 log.go:172] (0xc0015886e0) (0xc00209c8c0) Stream removed, broadcasting: 5
Jan 27 13:52:05.876: INFO: Waiting for endpoints: map[]
Jan 27 13:52:05.886: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4030 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 13:52:05.886: INFO: >>> kubeConfig: /root/.kube/config
I0127 13:52:05.944123 8 log.go:172] (0xc000e7cb00) (0xc0023372c0) Create stream
I0127 13:52:05.944404 8 log.go:172] (0xc000e7cb00) (0xc0023372c0) Stream added, broadcasting: 1
I0127 13:52:05.953633 8 log.go:172] (0xc000e7cb00) Reply frame received for 1
I0127 13:52:05.953695 8 log.go:172] (0xc000e7cb00) (0xc0018f2aa0) Create stream
I0127 13:52:05.953708 8 log.go:172] (0xc000e7cb00) (0xc0018f2aa0) Stream added, broadcasting: 3
I0127 13:52:05.955015 8 log.go:172] (0xc000e7cb00) Reply frame received for 3
I0127 13:52:05.955041 8 log.go:172] (0xc000e7cb00) (0xc00050bae0) Create stream
I0127 13:52:05.955053 8 log.go:172] (0xc000e7cb00) (0xc00050bae0) Stream added, broadcasting: 5
I0127 13:52:05.956173 8 log.go:172] (0xc000e7cb00) Reply frame received for 5
I0127 13:52:06.065146 8 log.go:172] (0xc000e7cb00) Data frame received for 3
I0127 13:52:06.065219 8 log.go:172] (0xc0018f2aa0) (3) Data frame handling
I0127 13:52:06.065243 8 log.go:172] (0xc0018f2aa0) (3) Data frame sent
I0127 13:52:06.207331 8 log.go:172] (0xc000e7cb00) Data frame received for 1
I0127 13:52:06.207438 8 log.go:172] (0xc000e7cb00) (0xc0018f2aa0) Stream removed, broadcasting: 3
I0127 13:52:06.207476 8 log.go:172] (0xc0023372c0) (1) Data frame handling
I0127 13:52:06.207498 8 log.go:172] (0xc0023372c0) (1) Data frame sent
I0127 13:52:06.207509 8 log.go:172] (0xc000e7cb00) (0xc00050bae0) Stream removed, broadcasting: 5
I0127 13:52:06.207532 8 log.go:172] (0xc000e7cb00) (0xc0023372c0) Stream removed, broadcasting: 1
I0127 13:52:06.207566 8 log.go:172] (0xc000e7cb00) Go away received
I0127 13:52:06.207835 8 log.go:172] (0xc000e7cb00) (0xc0023372c0) Stream removed, broadcasting: 1
I0127 13:52:06.207850 8 log.go:172] (0xc000e7cb00) (0xc0018f2aa0) Stream removed, broadcasting: 3
I0127 13:52:06.207857 8 log.go:172] (0xc000e7cb00) (0xc00050bae0) Stream removed, broadcasting: 5
Jan 27 13:52:06.207: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:52:06.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4030" for this suite.
Jan 27 13:52:28.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:52:28.411: INFO: namespace pod-network-test-4030 deletion completed in 22.194937814s

• [SLOW TEST:57.384 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:52:28.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 27 13:52:28.545: INFO: Waiting up to 5m0s for pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0" in namespace "downward-api-7990" to be "success or failure"
Jan 27 13:52:28.555: INFO: Pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.89047ms
Jan 27 13:52:30.569: INFO: Pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023194326s
Jan 27 13:52:32.585: INFO: Pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039202525s
Jan 27 13:52:34.598: INFO: Pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052413097s
Jan 27 13:52:36.615: INFO: Pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069480737s
Jan 27 13:52:38.627: INFO: Pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08130741s
STEP: Saw pod success
Jan 27 13:52:38.627: INFO: Pod "downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0" satisfied condition "success or failure"
Jan 27 13:52:38.632: INFO: Trying to get logs from node iruya-node pod downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0 container dapi-container:
STEP: delete the pod
Jan 27 13:52:38.747: INFO: Waiting for pod downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0 to disappear
Jan 27 13:52:38.759: INFO: Pod downward-api-c4ca91b0-d81f-44be-8167-89258d3cfeb0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:52:38.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7990" for this suite.
Jan 27 13:52:44.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:52:45.072: INFO: namespace downward-api-7990 deletion completed in 6.277735027s

• [SLOW TEST:16.660 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:52:45.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:52:55.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4905" for this suite.
Jan 27 13:53:57.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:53:57.816: INFO: namespace kubelet-test-4905 deletion completed in 1m2.192316776s

• [SLOW TEST:72.743 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:53:57.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:53:57.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07" in namespace "downward-api-3921" to be "success or failure"
Jan 27 13:53:57.937: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07": Phase="Pending", Reason="", readiness=false. Elapsed: 12.212912ms
Jan 27 13:53:59.943: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018582621s
Jan 27 13:54:01.951: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026274755s
Jan 27 13:54:03.962: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037437656s
Jan 27 13:54:05.975: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050693332s
Jan 27 13:54:07.983: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058558394s
Jan 27 13:54:10.009: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.08480465s
STEP: Saw pod success
Jan 27 13:54:10.009: INFO: Pod "downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07" satisfied condition "success or failure"
Jan 27 13:54:10.014: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07 container client-container:
STEP: delete the pod
Jan 27 13:54:10.073: INFO: Waiting for pod downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07 to disappear
Jan 27 13:54:10.082: INFO: Pod downwardapi-volume-da142042-7f2e-4000-9b2a-75ae715c9d07 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:54:10.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3921" for this suite.
Jan 27 13:54:16.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:54:16.513: INFO: namespace downward-api-3921 deletion completed in 6.422530148s

• [SLOW TEST:18.697 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:54:16.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 27 13:54:16.631: INFO: Waiting up to 5m0s for pod "pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b" in namespace "emptydir-7997" to be "success or failure"
Jan 27 13:54:16.655: INFO: Pod "pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.530293ms
Jan 27 13:54:18.663: INFO: Pod "pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031522307s
Jan 27 13:54:20.670: INFO: Pod "pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038899894s
Jan 27 13:54:22.686: INFO: Pod "pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054262441s
Jan 27 13:54:24.695: INFO: Pod "pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063806277s
STEP: Saw pod success
Jan 27 13:54:24.695: INFO: Pod "pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b" satisfied condition "success or failure"
Jan 27 13:54:24.701: INFO: Trying to get logs from node iruya-node pod pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b container test-container:
STEP: delete the pod
Jan 27 13:54:24.804: INFO: Waiting for pod pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b to disappear
Jan 27 13:54:24.812: INFO: Pod pod-3effa8cc-b8c5-4657-b1d8-1fcfbd9d148b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:54:24.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7997" for this suite.
Jan 27 13:54:30.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:54:30.964: INFO: namespace emptydir-7997 deletion completed in 6.146493407s
• [SLOW TEST:14.449 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:54:30.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 27 13:54:31.036: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 27 13:54:36.047: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:54:36.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8640" for this suite.
Jan 27 13:54:42.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:54:42.528: INFO: namespace replication-controller-8640 deletion completed in 6.272683892s
• [SLOW TEST:11.563 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:54:42.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:54:49.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4058" for this suite.
Jan 27 13:54:55.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:54:55.401: INFO: namespace namespaces-4058 deletion completed in 6.138493286s
STEP: Destroying namespace "nsdeletetest-2061" for this suite.
Jan 27 13:54:55.404: INFO: Namespace nsdeletetest-2061 was already deleted
STEP: Destroying namespace "nsdeletetest-3970" for this suite.
Jan 27 13:55:01.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:55:01.558: INFO: namespace nsdeletetest-3970 deletion completed in 6.154722561s
• [SLOW TEST:19.029 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:55:01.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5163.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5163.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5163.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5163.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5163.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5163.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 13:55:15.892: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc: the server could not find the requested resource (get pods dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc)
Jan 27 13:55:15.897: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc: the server could not find the requested resource (get pods dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc)
Jan 27 13:55:15.905: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5163.svc.cluster.local from pod dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc: the server could not find the requested resource (get pods dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc)
Jan 27 13:55:15.909: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc: the server could not find the requested resource (get pods dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc)
Jan 27 13:55:15.914: INFO: Unable to read jessie_udp@PodARecord from pod dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc: the server could not find the requested resource (get pods dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc)
Jan 27 13:55:15.917: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc: the server could not find the requested resource (get pods dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc)
Jan 27 13:55:15.917: INFO: Lookups using dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-5163.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Jan 27 13:55:20.988: INFO: DNS probes using dns-5163/dns-test-cc25f1bb-d385-4c7f-a0ca-ddadd0c5e5bc succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:55:21.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5163" for this suite.
Jan 27 13:55:27.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:55:27.319: INFO: namespace dns-5163 deletion completed in 6.155269452s
• [SLOW TEST:25.759 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:55:27.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 27 13:55:27.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4036'
Jan 27 13:55:27.826: INFO: stderr: ""
Jan 27 13:55:27.826: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 27 13:55:28.839: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:28.839: INFO: Found 0 / 1
Jan 27 13:55:29.838: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:29.839: INFO: Found 0 / 1
Jan 27 13:55:30.837: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:30.837: INFO: Found 0 / 1
Jan 27 13:55:31.834: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:31.834: INFO: Found 0 / 1
Jan 27 13:55:32.835: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:32.835: INFO: Found 0 / 1
Jan 27 13:55:33.838: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:33.838: INFO: Found 0 / 1
Jan 27 13:55:34.859: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:34.859: INFO: Found 0 / 1
Jan 27 13:55:35.837: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:35.837: INFO: Found 0 / 1
Jan 27 13:55:36.835: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:36.835: INFO: Found 1 / 1
Jan 27 13:55:36.835: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 27 13:55:36.839: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:36.839: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 27 13:55:36.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kk29r --namespace=kubectl-4036 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 27 13:55:36.975: INFO: stderr: ""
Jan 27 13:55:36.975: INFO: stdout: "pod/redis-master-kk29r patched\n"
STEP: checking annotations
Jan 27 13:55:36.983: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:55:36.983: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:55:36.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4036" for this suite.
Jan 27 13:55:59.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:55:59.102: INFO: namespace kubectl-4036 deletion completed in 22.11452259s
• [SLOW TEST:31.782 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:55:59.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 27 13:55:59.201: INFO: Waiting up to 5m0s for pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce" in namespace "var-expansion-7446" to be "success or failure"
Jan 27 13:55:59.284: INFO: Pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce": Phase="Pending", Reason="", readiness=false. Elapsed: 82.682578ms
Jan 27 13:56:01.292: INFO: Pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090745764s
Jan 27 13:56:03.305: INFO: Pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103258944s
Jan 27 13:56:05.313: INFO: Pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111467103s
Jan 27 13:56:07.325: INFO: Pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123635461s
Jan 27 13:56:09.333: INFO: Pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131141421s
STEP: Saw pod success
Jan 27 13:56:09.333: INFO: Pod "var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce" satisfied condition "success or failure"
Jan 27 13:56:09.336: INFO: Trying to get logs from node iruya-node pod var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce container dapi-container:
STEP: delete the pod
Jan 27 13:56:09.427: INFO: Waiting for pod var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce to disappear
Jan 27 13:56:09.435: INFO: Pod var-expansion-c2521d3f-da7c-47f6-964d-3606c58a4cce no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:56:09.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7446" for this suite.
Jan 27 13:56:15.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:56:15.577: INFO: namespace var-expansion-7446 deletion completed in 6.136490607s
• [SLOW TEST:16.475 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:56:15.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-291e7fee-bbe3-46b0-82e0-9a225f6f528f
STEP: Creating a pod to test consume secrets
Jan 27 13:56:16.012: INFO: Waiting up to 5m0s for pod "pod-secrets-36b52635-1104-447f-8f68-8268da7d6029" in namespace "secrets-6557" to be "success or failure"
Jan 27 13:56:16.019: INFO: Pod "pod-secrets-36b52635-1104-447f-8f68-8268da7d6029": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799818ms
Jan 27 13:56:18.063: INFO: Pod "pod-secrets-36b52635-1104-447f-8f68-8268da7d6029": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051140809s
Jan 27 13:56:20.071: INFO: Pod "pod-secrets-36b52635-1104-447f-8f68-8268da7d6029": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059318911s
Jan 27 13:56:22.082: INFO: Pod "pod-secrets-36b52635-1104-447f-8f68-8268da7d6029": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069503971s
Jan 27 13:56:24.094: INFO: Pod "pod-secrets-36b52635-1104-447f-8f68-8268da7d6029": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081497937s
STEP: Saw pod success
Jan 27 13:56:24.094: INFO: Pod "pod-secrets-36b52635-1104-447f-8f68-8268da7d6029" satisfied condition "success or failure"
Jan 27 13:56:24.103: INFO: Trying to get logs from node iruya-node pod pod-secrets-36b52635-1104-447f-8f68-8268da7d6029 container secret-volume-test:
STEP: delete the pod
Jan 27 13:56:24.152: INFO: Waiting for pod pod-secrets-36b52635-1104-447f-8f68-8268da7d6029 to disappear
Jan 27 13:56:24.156: INFO: Pod pod-secrets-36b52635-1104-447f-8f68-8268da7d6029 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:56:24.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6557" for this suite.
Jan 27 13:56:30.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:56:30.384: INFO: namespace secrets-6557 deletion completed in 6.223434159s
STEP: Destroying namespace "secret-namespace-1174" for this suite.
Jan 27 13:56:36.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:56:36.540: INFO: namespace secret-namespace-1174 deletion completed in 6.154941987s
• [SLOW TEST:20.962 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:56:36.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 13:56:46.928: INFO: Waiting up to 5m0s for pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183" in namespace "pods-6069" to be "success or failure"
Jan 27 13:56:46.949: INFO: Pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183": Phase="Pending", Reason="", readiness=false. Elapsed: 20.43653ms
Jan 27 13:56:48.967: INFO: Pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03798933s
Jan 27 13:56:50.973: INFO: Pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044275024s
Jan 27 13:56:52.983: INFO: Pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054332353s
Jan 27 13:56:54.995: INFO: Pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066609602s
Jan 27 13:56:57.004: INFO: Pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075543061s
STEP: Saw pod success
Jan 27 13:56:57.004: INFO: Pod "client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183" satisfied condition "success or failure"
Jan 27 13:56:57.009: INFO: Trying to get logs from node iruya-node pod client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183 container env3cont:
STEP: delete the pod
Jan 27 13:56:57.082: INFO: Waiting for pod client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183 to disappear
Jan 27 13:56:57.087: INFO: Pod client-envvars-040c9a70-c8d4-48aa-afb4-870b5c779183 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:56:57.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6069" for this suite.
Jan 27 13:57:49.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:57:49.302: INFO: namespace pods-6069 deletion completed in 52.209731868s
• [SLOW TEST:72.762 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:57:49.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:57:49.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2527" for this suite.
Jan 27 13:58:11.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:58:11.783: INFO: namespace pods-2527 deletion completed in 22.302661812s
• [SLOW TEST:22.477 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:58:11.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 27 13:58:11.938: INFO: Waiting up to 5m0s for pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67" in namespace "var-expansion-8003" to be "success or failure"
Jan 27 13:58:11.969: INFO: Pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67": Phase="Pending", Reason="", readiness=false. Elapsed: 30.224621ms
Jan 27 13:58:13.978: INFO: Pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039291398s
Jan 27 13:58:15.983: INFO: Pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045055579s
Jan 27 13:58:17.991: INFO: Pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052632833s
Jan 27 13:58:20.034: INFO: Pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09598758s
Jan 27 13:58:22.053: INFO: Pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114365399s
STEP: Saw pod success
Jan 27 13:58:22.053: INFO: Pod "var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67" satisfied condition "success or failure"
Jan 27 13:58:22.056: INFO: Trying to get logs from node iruya-node pod var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67 container dapi-container:
STEP: delete the pod
Jan 27 13:58:22.601: INFO: Waiting for pod var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67 to disappear
Jan 27 13:58:22.609: INFO: Pod var-expansion-896956af-62ef-4853-84fe-6f8d1e516e67 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:58:22.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8003" for this suite.
Jan 27 13:58:28.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:58:28.814: INFO: namespace var-expansion-8003 deletion completed in 6.196271171s

• [SLOW TEST:17.029 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:58:28.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 27 13:58:28.952: INFO: Waiting up to 5m0s for pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3" in namespace "downward-api-7568" to be "success or failure"
Jan 27 13:58:28.961: INFO: Pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.082185ms
Jan 27 13:58:30.973: INFO: Pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021759685s
Jan 27 13:58:32.982: INFO: Pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029884728s
Jan 27 13:58:34.989: INFO: Pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037319221s
Jan 27 13:58:36.996: INFO: Pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044491961s
Jan 27 13:58:39.006: INFO: Pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054047305s
STEP: Saw pod success
Jan 27 13:58:39.006: INFO: Pod "downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3" satisfied condition "success or failure"
Jan 27 13:58:39.009: INFO: Trying to get logs from node iruya-node pod downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3 container dapi-container:
STEP: delete the pod
Jan 27 13:58:39.115: INFO: Waiting for pod downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3 to disappear
Jan 27 13:58:39.127: INFO: Pod downward-api-a081341c-8775-415b-bf0e-e9bfc11164b3 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:58:39.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7568" for this suite.
Jan 27 13:58:45.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:58:45.321: INFO: namespace downward-api-7568 deletion completed in 6.184857177s

• [SLOW TEST:16.506 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:58:45.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:58:45.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae" in namespace "projected-7711" to be "success or failure"
Jan 27 13:58:45.453: INFO: Pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210622ms
Jan 27 13:58:47.473: INFO: Pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026041509s
Jan 27 13:58:49.483: INFO: Pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036077187s
Jan 27 13:58:51.492: INFO: Pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045920856s
Jan 27 13:58:53.502: INFO: Pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055780578s
Jan 27 13:58:55.513: INFO: Pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065966063s
STEP: Saw pod success
Jan 27 13:58:55.513: INFO: Pod "downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae" satisfied condition "success or failure"
Jan 27 13:58:55.517: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae container client-container:
STEP: delete the pod
Jan 27 13:58:55.591: INFO: Waiting for pod downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae to disappear
Jan 27 13:58:55.598: INFO: Pod downwardapi-volume-bb670a14-9071-4860-8527-8f4b6d51efae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:58:55.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7711" for this suite.
Jan 27 13:59:01.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:59:01.775: INFO: namespace projected-7711 deletion completed in 6.167379702s

• [SLOW TEST:16.453 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:59:01.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 27 13:59:18.062: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 13:59:18.075: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 13:59:20.075: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 13:59:20.090: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 13:59:22.075: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 13:59:22.103: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 13:59:24.075: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 13:59:24.107: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 13:59:26.077: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 13:59:26.085: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:59:26.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4524" for this suite.
Jan 27 13:59:48.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:59:48.265: INFO: namespace container-lifecycle-hook-4524 deletion completed in 22.148367515s

• [SLOW TEST:46.489 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 13:59:48.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 27 13:59:48.388: INFO: Waiting up to 5m0s for pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6" in namespace "emptydir-5361" to be "success or failure"
Jan 27 13:59:48.411: INFO: Pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.986641ms
Jan 27 13:59:50.420: INFO: Pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031846643s
Jan 27 13:59:52.430: INFO: Pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04127632s
Jan 27 13:59:54.447: INFO: Pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059011029s
Jan 27 13:59:56.472: INFO: Pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083192472s
Jan 27 13:59:58.486: INFO: Pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097071397s
STEP: Saw pod success
Jan 27 13:59:58.486: INFO: Pod "pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6" satisfied condition "success or failure"
Jan 27 13:59:58.493: INFO: Trying to get logs from node iruya-node pod pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6 container test-container:
STEP: delete the pod
Jan 27 13:59:58.764: INFO: Waiting for pod pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6 to disappear
Jan 27 13:59:58.795: INFO: Pod pod-02c62fb3-8cdb-4b2d-a6bf-84198c96c1d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 13:59:58.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5361" for this suite.
Jan 27 14:00:04.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:00:05.060: INFO: namespace emptydir-5361 deletion completed in 6.251917964s

• [SLOW TEST:16.794 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:00:05.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-656e07a2-328d-4d10-bdff-8ec027d48e16
STEP: Creating a pod to test consume secrets
Jan 27 14:00:05.244: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b" in namespace "projected-649" to be "success or failure"
Jan 27 14:00:05.275: INFO: Pod "pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.916878ms
Jan 27 14:00:07.284: INFO: Pod "pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039731343s
Jan 27 14:00:09.291: INFO: Pod "pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047018594s
Jan 27 14:00:11.305: INFO: Pod "pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06046482s
Jan 27 14:00:13.333: INFO: Pod "pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08907734s
STEP: Saw pod success
Jan 27 14:00:13.333: INFO: Pod "pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b" satisfied condition "success or failure"
Jan 27 14:00:13.337: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b container projected-secret-volume-test:
STEP: delete the pod
Jan 27 14:00:13.434: INFO: Waiting for pod pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b to disappear
Jan 27 14:00:13.466: INFO: Pod pod-projected-secrets-0a0d4319-7981-459b-84a3-48630721808b no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:00:13.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-649" for this suite.
Jan 27 14:00:19.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:00:19.638: INFO: namespace projected-649 deletion completed in 6.165162944s

• [SLOW TEST:14.577 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:00:19.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 14:00:19.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-862'
Jan 27 14:00:21.741: INFO: stderr: ""
Jan 27 14:00:21.741: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 27 14:00:21.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-862'
Jan 27 14:00:27.087: INFO: stderr: ""
Jan 27 14:00:27.087: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:00:27.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-862" for this suite.
Jan 27 14:00:33.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:00:33.415: INFO: namespace kubectl-862 deletion completed in 6.315349963s

• [SLOW TEST:13.777 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:00:33.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9b8a0e51-79ac-47aa-8502-9e34c7dee8f8
STEP: Creating a pod to test consume configMaps
Jan 27 14:00:33.608: INFO: Waiting up to 5m0s for pod "pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6" in namespace "configmap-4839" to be "success or failure"
Jan 27 14:00:33.641: INFO: Pod "pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.323961ms
Jan 27 14:00:35.649: INFO: Pod "pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040809644s
Jan 27 14:00:37.656: INFO: Pod "pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047660049s
Jan 27 14:00:39.673: INFO: Pod "pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064698052s
Jan 27 14:00:41.687: INFO: Pod "pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078350275s
STEP: Saw pod success
Jan 27 14:00:41.687: INFO: Pod "pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6" satisfied condition "success or failure"
Jan 27 14:00:41.694: INFO: Trying to get logs from node iruya-node pod pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6 container configmap-volume-test:
STEP: delete the pod
Jan 27 14:00:41.750: INFO: Waiting for pod pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6 to disappear
Jan 27 14:00:41.904: INFO: Pod pod-configmaps-12cbde27-f76f-45fb-aa6e-78071abb42a6 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:00:41.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4839" for this suite.
Jan 27 14:00:47.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:00:48.088: INFO: namespace configmap-4839 deletion completed in 6.174242335s

• [SLOW TEST:14.672 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:00:48.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:00:48.200: INFO: Creating ReplicaSet my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7
Jan 27 14:00:48.211: INFO: Pod name my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7: Found 0 pods out of 1
Jan 27 14:00:53.221: INFO: Pod name my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7: Found 1 pods out of 1
Jan 27 14:00:53.221: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7" is running
Jan 27 14:00:57.239: INFO: Pod "my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7-dd48b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:00:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:00:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:00:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:00:48 +0000 UTC Reason: Message:}])
Jan 27 14:00:57.239: INFO: Trying to dial the pod
Jan 27 14:01:02.280: INFO: Controller my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7: Got expected result from replica 1 [my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7-dd48b]: "my-hostname-basic-9577cff7-1796-4ebd-ad5f-27a2ff0927e7-dd48b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:01:02.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9330" for this suite.
Jan 27 14:01:08.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:01:08.429: INFO: namespace replicaset-9330 deletion completed in 6.141702446s

• [SLOW TEST:20.340 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:01:08.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 27 14:01:08.530: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-429,SelfLink:/api/v1/namespaces/watch-429/configmaps/e2e-watch-test-label-changed,UID:e7d2e41a-6397-460d-a48c-4a324188c822,ResourceVersion:22068859,Generation:0,CreationTimestamp:2020-01-27 14:01:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 14:01:08.530: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-429,SelfLink:/api/v1/namespaces/watch-429/configmaps/e2e-watch-test-label-changed,UID:e7d2e41a-6397-460d-a48c-4a324188c822,ResourceVersion:22068860,Generation:0,CreationTimestamp:2020-01-27 14:01:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 27 14:01:08.530: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-429,SelfLink:/api/v1/namespaces/watch-429/configmaps/e2e-watch-test-label-changed,UID:e7d2e41a-6397-460d-a48c-4a324188c822,ResourceVersion:22068861,Generation:0,CreationTimestamp:2020-01-27 14:01:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 27 14:01:18.600: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-429,SelfLink:/api/v1/namespaces/watch-429/configmaps/e2e-watch-test-label-changed,UID:e7d2e41a-6397-460d-a48c-4a324188c822,ResourceVersion:22068878,Generation:0,CreationTimestamp:2020-01-27 14:01:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 14:01:18.600: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-429,SelfLink:/api/v1/namespaces/watch-429/configmaps/e2e-watch-test-label-changed,UID:e7d2e41a-6397-460d-a48c-4a324188c822,ResourceVersion:22068879,Generation:0,CreationTimestamp:2020-01-27 14:01:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 27 14:01:18.601: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-429,SelfLink:/api/v1/namespaces/watch-429/configmaps/e2e-watch-test-label-changed,UID:e7d2e41a-6397-460d-a48c-4a324188c822,ResourceVersion:22068880,Generation:0,CreationTimestamp:2020-01-27 14:01:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:01:18.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-429" for this suite.
Jan 27 14:01:24.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:01:24.773: INFO: namespace watch-429 deletion completed in 6.148832965s

• [SLOW TEST:16.342 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:01:24.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1b4db0fb-ae9d-4faa-a7a1-42cbf2d5dd58
STEP: Creating a pod to test consume secrets
Jan 27 14:01:24.917: INFO: Waiting up to 5m0s for pod "pod-secrets-68718bc7-c335-476d-8204-339849001096" in namespace "secrets-811" to be "success or failure"
Jan 27 14:01:24.925: INFO: Pod "pod-secrets-68718bc7-c335-476d-8204-339849001096": Phase="Pending", Reason="", readiness=false. Elapsed: 7.966793ms
Jan 27 14:01:26.939: INFO: Pod "pod-secrets-68718bc7-c335-476d-8204-339849001096": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021789151s
Jan 27 14:01:29.697: INFO: Pod "pod-secrets-68718bc7-c335-476d-8204-339849001096": Phase="Pending", Reason="", readiness=false. Elapsed: 4.779377651s
Jan 27 14:01:31.708: INFO: Pod "pod-secrets-68718bc7-c335-476d-8204-339849001096": Phase="Pending", Reason="", readiness=false. Elapsed: 6.79045052s
Jan 27 14:01:33.722: INFO: Pod "pod-secrets-68718bc7-c335-476d-8204-339849001096": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.804779904s
STEP: Saw pod success
Jan 27 14:01:33.722: INFO: Pod "pod-secrets-68718bc7-c335-476d-8204-339849001096" satisfied condition "success or failure"
Jan 27 14:01:33.727: INFO: Trying to get logs from node iruya-node pod pod-secrets-68718bc7-c335-476d-8204-339849001096 container secret-env-test:
STEP: delete the pod
Jan 27 14:01:33.820: INFO: Waiting for pod pod-secrets-68718bc7-c335-476d-8204-339849001096 to disappear
Jan 27 14:01:33.967: INFO: Pod pod-secrets-68718bc7-c335-476d-8204-339849001096 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:01:33.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-811" for this suite.
Jan 27 14:01:39.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:01:40.142: INFO: namespace secrets-811 deletion completed in 6.167553528s

• [SLOW TEST:15.369 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:01:40.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:01:40.287: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 27 14:01:40.305: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 27 14:01:45.314: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 27 14:01:49.324: INFO: Creating deployment "test-rolling-update-deployment"
Jan 27 14:01:49.330: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 27 14:01:49.366: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 27 14:01:51.379: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 27 14:01:51.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}},
CollisionCount:(*int32)(nil)} Jan 27 14:01:53.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 14:01:55.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 14:01:57.391: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715730509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 14:01:59.390: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 27 14:01:59.404: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1834,SelfLink:/apis/apps/v1/namespaces/deployment-1834/deployments/test-rolling-update-deployment,UID:bc1596c3-2ed7-4c97-bdb8-c9cc4b448612,ResourceVersion:22069006,Generation:1,CreationTimestamp:2020-01-27 14:01:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-27 14:01:49 +0000 UTC 2020-01-27 14:01:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-27 14:01:57 +0000 UTC 2020-01-27 14:01:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 27 14:01:59.409: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1834,SelfLink:/apis/apps/v1/namespaces/deployment-1834/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:f02d5cc7-8643-4921-8639-fbe41797a06f,ResourceVersion:22068996,Generation:1,CreationTimestamp:2020-01-27 14:01:49 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc1596c3-2ed7-4c97-bdb8-c9cc4b448612 0xc003128dc7 0xc003128dc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 27 14:01:59.409: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 27 14:01:59.409: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1834,SelfLink:/apis/apps/v1/namespaces/deployment-1834/replicasets/test-rolling-update-controller,UID:f0f579e1-eecd-4962-9dca-42610089c6ad,ResourceVersion:22069005,Generation:2,CreationTimestamp:2020-01-27 14:01:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc1596c3-2ed7-4c97-bdb8-c9cc4b448612 0xc003128cf7 0xc003128cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 27 14:01:59.413: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-cdr5t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-cdr5t,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1834,SelfLink:/api/v1/namespaces/deployment-1834/pods/test-rolling-update-deployment-79f6b9d75c-cdr5t,UID:17dc8f02-1480-4ae1-b0e4-bc65c9de03a3,ResourceVersion:22068995,Generation:0,CreationTimestamp:2020-01-27 14:01:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c f02d5cc7-8643-4921-8639-fbe41797a06f 0xc0031e94c7 0xc0031e94c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k2zm7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k2zm7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-k2zm7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9830} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e9850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:01:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:01:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:01:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:01:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-27 14:01:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-27 14:01:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d25c016e25a148b9449d46f894a6a47d018a0120dcc6d2e5bc7640713ae79692}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 14:01:59.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-1834" for this suite. Jan 27 14:02:05.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 14:02:05.552: INFO: namespace deployment-1834 deletion completed in 6.13364455s • [SLOW TEST:25.409 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 14:02:05.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 27 14:02:05.621: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 27 14:02:05.634: INFO: Waiting for terminating namespaces to be deleted... 
Jan 27 14:02:05.638: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 27 14:02:05.647: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 27 14:02:05.647: INFO: Container kube-proxy ready: true, restart count 0 Jan 27 14:02:05.647: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 27 14:02:05.647: INFO: Container weave ready: true, restart count 0 Jan 27 14:02:05.647: INFO: Container weave-npc ready: true, restart count 0 Jan 27 14:02:05.647: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 27 14:02:05.734: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 27 14:02:05.734: INFO: Container etcd ready: true, restart count 0 Jan 27 14:02:05.734: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 27 14:02:05.734: INFO: Container weave ready: true, restart count 0 Jan 27 14:02:05.734: INFO: Container weave-npc ready: true, restart count 0 Jan 27 14:02:05.734: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 27 14:02:05.734: INFO: Container coredns ready: true, restart count 0 Jan 27 14:02:05.734: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 27 14:02:05.734: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 27 14:02:05.734: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 27 14:02:05.734: INFO: Container kube-proxy ready: true, restart count 0 Jan 27 14:02:05.734: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 
UTC (1 container statuses recorded) Jan 27 14:02:05.734: INFO: Container kube-apiserver ready: true, restart count 0 Jan 27 14:02:05.734: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 27 14:02:05.734: INFO: Container kube-scheduler ready: true, restart count 13 Jan 27 14:02:05.734: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 27 14:02:05.734: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-55b4eecf-ccbd-487b-aedf-4d7df506bcfc 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-55b4eecf-ccbd-487b-aedf-4d7df506bcfc off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-55b4eecf-ccbd-487b-aedf-4d7df506bcfc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 14:02:26.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2901" for this suite. 
Jan 27 14:02:40.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 14:02:40.347: INFO: namespace sched-pred-2901 deletion completed in 14.190982595s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:34.794 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 14:02:40.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jan 27 14:02:40.456: INFO: Waiting up to 5m0s for pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851" in namespace "var-expansion-9469" to be "success or failure" Jan 27 14:02:40.474: INFO: Pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.817613ms Jan 27 14:02:42.499: INFO: Pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042247187s Jan 27 14:02:44.511: INFO: Pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054997681s Jan 27 14:02:46.525: INFO: Pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068386085s Jan 27 14:02:48.543: INFO: Pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086308818s Jan 27 14:02:50.554: INFO: Pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097741573s STEP: Saw pod success Jan 27 14:02:50.554: INFO: Pod "var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851" satisfied condition "success or failure" Jan 27 14:02:50.559: INFO: Trying to get logs from node iruya-node pod var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851 container dapi-container: STEP: delete the pod Jan 27 14:02:50.672: INFO: Waiting for pod var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851 to disappear Jan 27 14:02:50.718: INFO: Pod var-expansion-886bcd02-1c81-4c6e-ab37-a23fbc74b851 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 14:02:50.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9469" for this suite. 
Jan 27 14:02:56.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 14:02:56.881: INFO: namespace var-expansion-9469 deletion completed in 6.152939365s • [SLOW TEST:16.534 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 27 14:02:56.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 14:03:05.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2280" for this suite. 
Jan 27 14:03:49.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:03:49.296: INFO: namespace kubelet-test-2280 deletion completed in 44.172989028s
• [SLOW TEST:52.415 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:03:49.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-694.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-694.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 14:04:03.479: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.495: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.505: INFO: Unable to read wheezy_udp@PodARecord from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.511: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.517: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.524: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.531: INFO: Unable to read jessie_udp@PodARecord from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.538: INFO: Unable to read jessie_tcp@PodARecord from pod dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b: the server could not find the requested resource (get pods dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b)
Jan 27 14:04:03.538: INFO: Lookups using dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Jan 27 14:04:08.779: INFO: DNS probes using dns-694/dns-test-7773b278-3287-4cc6-9ffa-ac4efe8e518b succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:04:08.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-694" for this suite.
Jan 27 14:04:14.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:04:15.081: INFO: namespace dns-694 deletion completed in 6.150058774s
• [SLOW TEST:25.784 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:04:15.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 27 14:04:24.317: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:04:24.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2237" for this suite.
Jan 27 14:04:30.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:04:30.545: INFO: namespace container-runtime-2237 deletion completed in 6.174211844s
• [SLOW TEST:15.464 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:04:30.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:05:00.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3883" for this suite.
Jan 27 14:05:07.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:05:07.198: INFO: namespace namespaces-3883 deletion completed in 6.211680981s
STEP: Destroying namespace "nsdeletetest-1173" for this suite.
Jan 27 14:05:07.201: INFO: Namespace nsdeletetest-1173 was already deleted
STEP: Destroying namespace "nsdeletetest-8598" for this suite.
Jan 27 14:05:13.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:05:13.370: INFO: namespace nsdeletetest-8598 deletion completed in 6.169545943s
• [SLOW TEST:42.823 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:05:13.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 27 14:05:24.073: INFO: Successfully updated pod "pod-update-activedeadlineseconds-dbedc220-3c00-49eb-8d10-92a1f1dc171f"
Jan 27 14:05:24.073: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-dbedc220-3c00-49eb-8d10-92a1f1dc171f" in namespace "pods-1781" to be "terminated due to deadline exceeded"
Jan 27 14:05:24.113: INFO: Pod "pod-update-activedeadlineseconds-dbedc220-3c00-49eb-8d10-92a1f1dc171f": Phase="Running", Reason="", readiness=true. Elapsed: 40.350563ms
Jan 27 14:05:26.124: INFO: Pod "pod-update-activedeadlineseconds-dbedc220-3c00-49eb-8d10-92a1f1dc171f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.051024207s
Jan 27 14:05:26.124: INFO: Pod "pod-update-activedeadlineseconds-dbedc220-3c00-49eb-8d10-92a1f1dc171f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:05:26.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1781" for this suite.
Jan 27 14:05:32.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:05:32.318: INFO: namespace pods-1781 deletion completed in 6.186406256s
• [SLOW TEST:18.947 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:05:32.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 27 14:05:32.431: INFO: Waiting up to 5m0s for pod "pod-b728a569-077f-4ace-a57d-79cd39604f49" in namespace "emptydir-1745" to be "success or failure"
Jan 27 14:05:32.438: INFO: Pod "pod-b728a569-077f-4ace-a57d-79cd39604f49": Phase="Pending", Reason="", readiness=false. Elapsed: 7.170715ms
Jan 27 14:05:34.452: INFO: Pod "pod-b728a569-077f-4ace-a57d-79cd39604f49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020732217s
Jan 27 14:05:36.465: INFO: Pod "pod-b728a569-077f-4ace-a57d-79cd39604f49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033852357s
Jan 27 14:05:38.487: INFO: Pod "pod-b728a569-077f-4ace-a57d-79cd39604f49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055783539s
Jan 27 14:05:40.505: INFO: Pod "pod-b728a569-077f-4ace-a57d-79cd39604f49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073380261s
Jan 27 14:05:42.520: INFO: Pod "pod-b728a569-077f-4ace-a57d-79cd39604f49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088727995s
STEP: Saw pod success
Jan 27 14:05:42.520: INFO: Pod "pod-b728a569-077f-4ace-a57d-79cd39604f49" satisfied condition "success or failure"
Jan 27 14:05:42.526: INFO: Trying to get logs from node iruya-node pod pod-b728a569-077f-4ace-a57d-79cd39604f49 container test-container:
STEP: delete the pod
Jan 27 14:05:42.613: INFO: Waiting for pod pod-b728a569-077f-4ace-a57d-79cd39604f49 to disappear
Jan 27 14:05:42.647: INFO: Pod pod-b728a569-077f-4ace-a57d-79cd39604f49 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:05:42.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1745" for this suite.
Jan 27 14:05:48.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:05:48.768: INFO: namespace emptydir-1745 deletion completed in 6.115434845s
• [SLOW TEST:16.449 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:05:48.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3876, will wait for the garbage collector to delete the pods
Jan 27 14:05:58.961: INFO: Deleting Job.batch foo took: 9.034946ms
Jan 27 14:05:59.261: INFO: Terminating Job.batch foo pods took: 300.493005ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:06:46.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3876" for this suite.
Jan 27 14:06:54.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:06:54.732: INFO: namespace job-3876 deletion completed in 8.141194155s
• [SLOW TEST:65.963 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:06:54.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d90a4ef1-e60a-4622-ac0a-fdd8751cf422
STEP: Creating a pod to test consume configMaps
Jan 27 14:06:54.812: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2" in namespace "projected-613" to be "success or failure"
Jan 27 14:06:54.861: INFO: Pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 49.009092ms
Jan 27 14:06:56.886: INFO: Pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07373636s
Jan 27 14:06:58.895: INFO: Pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082488422s
Jan 27 14:07:00.907: INFO: Pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094664271s
Jan 27 14:07:02.912: INFO: Pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099803665s
Jan 27 14:07:04.933: INFO: Pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120363648s
STEP: Saw pod success
Jan 27 14:07:04.933: INFO: Pod "pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2" satisfied condition "success or failure"
Jan 27 14:07:04.939: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2 container projected-configmap-volume-test:
STEP: delete the pod
Jan 27 14:07:05.100: INFO: Waiting for pod pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2 to disappear
Jan 27 14:07:05.106: INFO: Pod pod-projected-configmaps-388ba8e4-6ef3-4b23-880f-33691adb0ed2 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:07:05.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-613" for this suite.
Jan 27 14:07:11.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:07:11.224: INFO: namespace projected-613 deletion completed in 6.113328243s
• [SLOW TEST:16.492 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:07:11.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:07:11.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133" in namespace "downward-api-9206" to be "success or failure"
Jan 27 14:07:11.343: INFO: Pod "downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.977708ms
Jan 27 14:07:13.356: INFO: Pod "downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019667007s
Jan 27 14:07:15.366: INFO: Pod "downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029799608s
Jan 27 14:07:17.384: INFO: Pod "downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047980997s
Jan 27 14:07:19.404: INFO: Pod "downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068113732s
STEP: Saw pod success
Jan 27 14:07:19.404: INFO: Pod "downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133" satisfied condition "success or failure"
Jan 27 14:07:19.410: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133 container client-container:
STEP: delete the pod
Jan 27 14:07:19.556: INFO: Waiting for pod downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133 to disappear
Jan 27 14:07:19.575: INFO: Pod downwardapi-volume-51b5dd65-5040-4013-8b94-b2c5007f2133 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:07:19.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9206" for this suite.
Jan 27 14:07:25.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:07:25.722: INFO: namespace downward-api-9206 deletion completed in 6.143348147s
• [SLOW TEST:14.498 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:07:25.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-2ec8669a-ba8e-4758-9a47-392c96cc0474
STEP: Creating a pod to test consume secrets
Jan 27 14:07:25.853: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09" in namespace "projected-4192" to be "success or failure"
Jan 27 14:07:25.866: INFO: Pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09": Phase="Pending", Reason="", readiness=false. Elapsed: 12.284973ms
Jan 27 14:07:27.876: INFO: Pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022469925s
Jan 27 14:07:29.886: INFO: Pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032987632s
Jan 27 14:07:31.896: INFO: Pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042888958s
Jan 27 14:07:33.906: INFO: Pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052324558s
Jan 27 14:07:35.920: INFO: Pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066720462s
STEP: Saw pod success
Jan 27 14:07:35.920: INFO: Pod "pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09" satisfied condition "success or failure"
Jan 27 14:07:35.927: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09 container projected-secret-volume-test:
STEP: delete the pod
Jan 27 14:07:36.086: INFO: Waiting for pod pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09 to disappear
Jan 27 14:07:36.156: INFO: Pod pod-projected-secrets-60afb2ae-6074-4ec4-8b80-52fd9f546c09 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:07:36.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4192" for this suite.
Jan 27 14:07:42.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:07:42.307: INFO: namespace projected-4192 deletion completed in 6.143398293s
• [SLOW TEST:16.584 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:07:42.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-1c915982-2d7e-438c-a766-69bdb1b91fc2
STEP: Creating configMap with name cm-test-opt-upd-04132522-c6b3-4a82-a202-e59b59f78b73
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1c915982-2d7e-438c-a766-69bdb1b91fc2
STEP: Updating configmap cm-test-opt-upd-04132522-c6b3-4a82-a202-e59b59f78b73
STEP: Creating configMap with name cm-test-opt-create-d931ac12-40d4-4ddc-9554-06e4327def78
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:09:00.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-961" for this suite.
Jan 27 14:09:22.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:09:22.937: INFO: namespace projected-961 deletion completed in 22.157470268s
• [SLOW TEST:100.630 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:09:22.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:09:23.043: INFO: Creating deployment "nginx-deployment"
Jan 27 14:09:23.052: INFO: Waiting for observed generation 1
Jan 27 14:09:25.432: INFO: Waiting for all required pods to come up
Jan 27 14:09:25.453: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 27 14:09:52.199: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 27 14:09:52.246: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 27 14:09:52.268: INFO: Updating deployment nginx-deployment
Jan 27 14:09:52.268: INFO: Waiting for observed generation 2
Jan 27 14:09:54.968: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 27 14:09:54.974: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 27 14:09:55.024: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 27 14:09:55.674: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 27 14:09:55.674: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 27 14:09:55.680: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 27 14:09:55.688: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 27 14:09:55.688: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 27 14:09:55.700: INFO: Updating deployment nginx-deployment
Jan 27 14:09:55.700: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 27 14:09:55.900: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 27 14:09:55.965: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 27 14:09:57.757: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5130,SelfLink:/apis/apps/v1/namespaces/deployment-5130/deployments/nginx-deployment,UID:ba78cdc3-a6f5-448d-afdd-41a2796a4afe,ResourceVersion:22070252,Generation:3,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-27 14:09:53 +0000 UTC 2020-01-27 14:09:23 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-27 14:09:55 +0000 UTC 2020-01-27 14:09:55 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 27 14:09:59.262: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5130,SelfLink:/apis/apps/v1/namespaces/deployment-5130/replicasets/nginx-deployment-55fb7cb77f,UID:c4601581-a37e-4ddc-b060-ff721d61bde0,ResourceVersion:22070248,Generation:3,CreationTimestamp:2020-01-27 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ba78cdc3-a6f5-448d-afdd-41a2796a4afe 0xc003078e97 0xc003078e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 27 14:09:59.262: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 27 14:09:59.262: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5130,SelfLink:/apis/apps/v1/namespaces/deployment-5130/replicasets/nginx-deployment-7b8c6f4498,UID:76f88f25-9620-4043-a51d-e2fa394f41b5,ResourceVersion:22070246,Generation:3,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ba78cdc3-a6f5-448d-afdd-41a2796a4afe 0xc003078f67 0xc003078f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 27 14:10:02.099: INFO: Pod "nginx-deployment-55fb7cb77f-25c6b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-25c6b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-25c6b,UID:9bd30012-9bc0-42f1-bacb-683a73af7a2f,ResourceVersion:22070280,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0030798e7 0xc0030798e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003079960} {node.kubernetes.io/unreachable Exists NoExecute 0xc003079980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.099: INFO: Pod "nginx-deployment-55fb7cb77f-8c7ph" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8c7ph,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-8c7ph,UID:15e4ee19-d549-4b7b-a2d4-6ed110f10959,ResourceVersion:22070285,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc003079a07 0xc003079a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003079ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003079ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.100: INFO: Pod "nginx-deployment-55fb7cb77f-8mwvs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8mwvs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-8mwvs,UID:9383be97-4b1a-4aa1-929a-0993b811d821,ResourceVersion:22070266,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc003079b67 0xc003079b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003079bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003079c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.100: INFO: Pod "nginx-deployment-55fb7cb77f-8wbbn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8wbbn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-8wbbn,UID:2484dfca-9b21-4fe6-a96d-84ab0978ab86,ResourceVersion:22070216,Generation:0,CreationTimestamp:2020-01-27 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc003079c97 0xc003079c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003079d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc003079d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-27 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.100: INFO: Pod "nginx-deployment-55fb7cb77f-bdhj4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bdhj4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-bdhj4,UID:640169b4-3beb-4ca3-ac6c-004c3e66ff51,ResourceVersion:22070268,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc003079df7 0xc003079df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003079e60} {node.kubernetes.io/unreachable Exists NoExecute 0xc003079e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.100: INFO: Pod "nginx-deployment-55fb7cb77f-gn5qf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gn5qf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-gn5qf,UID:9285284d-b7a3-413e-b2d0-3d91fe999452,ResourceVersion:22070212,Generation:0,CreationTimestamp:2020-01-27 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc003079f07 
0xc003079f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003079f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc003079fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-27 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.101: INFO: Pod "nginx-deployment-55fb7cb77f-hfp4r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hfp4r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-hfp4r,UID:7da7cee0-4ead-47c6-8762-71b6a2ebb8ed,ResourceVersion:22070283,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0024b4087 0xc0024b4088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b4120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b4140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.101: INFO: Pod "nginx-deployment-55fb7cb77f-ht7jz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ht7jz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-ht7jz,UID:fc61f85c-ac96-439c-89af-e101ce9e5915,ResourceVersion:22070234,Generation:0,CreationTimestamp:2020-01-27 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0024b4227 0xc0024b4228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b4380} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b43a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-27 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.101: INFO: Pod "nginx-deployment-55fb7cb77f-mcqlg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mcqlg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-mcqlg,UID:c3db8fbf-61b0-4ad0-97a5-f960a224ab0c,ResourceVersion:22070282,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0024b46d7 0xc0024b46d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b47d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b47f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.101: INFO: Pod "nginx-deployment-55fb7cb77f-n6n54" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n6n54,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-n6n54,UID:bf2ce3fc-2c83-4e0b-ba69-d34f7546f04e,ResourceVersion:22070244,Generation:0,CreationTimestamp:2020-01-27 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0024b4887 
0xc0024b4888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b4900} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b4920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:55 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-27 14:09:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.102: INFO: Pod "nginx-deployment-55fb7cb77f-qzvff" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qzvff,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-qzvff,UID:1a4b91c6-aab1-4e2d-9529-db371dec2961,ResourceVersion:22070224,Generation:0,CreationTimestamp:2020-01-27 14:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0024b4ab7 0xc0024b4ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b4b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b4b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-27 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.102: INFO: Pod "nginx-deployment-55fb7cb77f-tgv2b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tgv2b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-tgv2b,UID:b6bf879b-588c-434d-9a6a-b768cd4b3388,ResourceVersion:22070301,Generation:0,CreationTimestamp:2020-01-27 14:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0024b4c67 0xc0024b4c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b4cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b4d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-27 14:09:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.102: INFO: Pod "nginx-deployment-55fb7cb77f-xc4rd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xc4rd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-55fb7cb77f-xc4rd,UID:001fecae-88d6-463d-8eea-87b919c6c4c7,ResourceVersion:22070294,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4601581-a37e-4ddc-b060-ff721d61bde0 0xc0024b4de7 0xc0024b4de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b4e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b4e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.102: INFO: Pod "nginx-deployment-7b8c6f4498-2npz6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2npz6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-2npz6,UID:52426a7b-ab27-4b8f-9807-e21ad3c8fe51,ResourceVersion:22070284,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b4ef7 
0xc0024b4ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b4f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b4f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.103: INFO: Pod "nginx-deployment-7b8c6f4498-5zh98" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5zh98,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-5zh98,UID:b6e7b598-7361-4748-89bd-4dac1584fd93,ResourceVersion:22070172,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5027 0xc0024b5028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b50b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-27 14:09:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f53b01b8e697b95fee9b38973e39e0a007a4e8e8d59645eba19cf54c08931d38}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.103: INFO: Pod "nginx-deployment-7b8c6f4498-6vgtq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6vgtq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-6vgtq,UID:a06b282c-56d3-47db-a151-784657a23543,ResourceVersion:22070296,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5187 0xc0024b5188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b51f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.103: INFO: Pod "nginx-deployment-7b8c6f4498-8vlvt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vlvt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-8vlvt,UID:d713020d-2a70-4901-b3bc-b5057bc37e44,ResourceVersion:22070176,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5297 
0xc0024b5298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5310} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-27 14:09:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5415a22075b9722feeec6d03d6472a05b13ba0ca2af8c04eefb9709d852f34fe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.103: INFO: Pod "nginx-deployment-7b8c6f4498-9d56c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9d56c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-9d56c,UID:c6f938f1-eb9f-43e6-a9ec-238f0b16494c,ResourceVersion:22070159,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5407 0xc0024b5408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5470} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-27 14:09:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://930a62800e173877a77446dc60da93c0c4a7f508409b997e81d5d89ca1fa5784}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.103: INFO: Pod "nginx-deployment-7b8c6f4498-c9bpp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c9bpp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-c9bpp,UID:33e1ffce-092a-4d84-96d3-49161b262ba3,ResourceVersion:22070186,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5567 0xc0024b5568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b55e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-27 14:09:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://229a2c477cb52b0432142386e10e2f1fa47161215594d8a1bd1cbd1e62ad3635}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.103: INFO: Pod "nginx-deployment-7b8c6f4498-cw2tm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cw2tm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-cw2tm,UID:fc9879de-082f-4862-8c03-e2473eae7ffb,ResourceVersion:22070162,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b56d7 0xc0024b56d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5740} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-27 14:09:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2e99281f2d8e10a9a1288d772cbacd1ac39421dca46247ebe8160ad42135c58b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.104: INFO: Pod "nginx-deployment-7b8c6f4498-f597m" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f597m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-f597m,UID:ff5eb039-8573-4c85-96b4-6bd8efd5f0f9,ResourceVersion:22070279,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5847 0xc0024b5848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b58b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b58d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.104: INFO: Pod "nginx-deployment-7b8c6f4498-ht8n8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ht8n8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-ht8n8,UID:291d5a09-5ebb-48e2-9f7a-0d8ba83ccc61,ResourceVersion:22070183,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5957 
0xc0024b5958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b59e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-27 14:09:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5f0c4fd7a0e1e10a938e2e832f2af52e4262536d8bd99ebc5f3ded3ce1491545}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.104: INFO: Pod "nginx-deployment-7b8c6f4498-n7srb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n7srb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-n7srb,UID:11110086-aeed-4f78-b1de-3f64055b304a,ResourceVersion:22070255,Generation:0,CreationTimestamp:2020-01-27 14:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5ae7 0xc0024b5ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.104: INFO: Pod "nginx-deployment-7b8c6f4498-nfgn4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nfgn4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-nfgn4,UID:d3415e60-a5ce-4106-9cae-6c0e7c0f420a,ResourceVersion:22070295,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5c17 0xc0024b5c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.104: INFO: Pod "nginx-deployment-7b8c6f4498-p9q8d" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p9q8d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-p9q8d,UID:32912ffa-ddc8-4687-9f7d-684ca0100beb,ResourceVersion:22070270,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5d77 0xc0024b5d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.105: INFO: Pod "nginx-deployment-7b8c6f4498-pzmn9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pzmn9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-pzmn9,UID:107f599f-3c32-4c29-a080-d74e04439509,ResourceVersion:22070173,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5e87 
0xc0024b5e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024b5f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024b5f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-27 14:09:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://53f3fba89ee85450eb0ddca37299ccb8225bd8d192bc1b60af919f4ab2790bd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.105: INFO: Pod "nginx-deployment-7b8c6f4498-rlmg6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rlmg6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-rlmg6,UID:62cff0cc-d98c-4710-ba25-eea16225399d,ResourceVersion:22070278,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc0024b5ff7 0xc0024b5ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aec070} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aec090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.105: INFO: Pod "nginx-deployment-7b8c6f4498-rrn7v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rrn7v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-rrn7v,UID:904cbbe9-e2db-448b-bb95-3493b89c1769,ResourceVersion:22070265,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc002aec117 0xc002aec118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aec190} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aec1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.105: INFO: Pod "nginx-deployment-7b8c6f4498-rwjbm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rwjbm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-rwjbm,UID:369afadc-c0c5-4373-a124-f677af684c93,ResourceVersion:22070281,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc002aec237 0xc002aec238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aec2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aec2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.105: INFO: Pod "nginx-deployment-7b8c6f4498-s7w28" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s7w28,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-s7w28,UID:c3e158af-b9a9-4d4a-96c5-a119e23914cb,ResourceVersion:22070298,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc002aec357 0xc002aec358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aec3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aec3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.105: INFO: Pod "nginx-deployment-7b8c6f4498-vvwrm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vvwrm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-vvwrm,UID:34315966-fe25-4f6d-af28-c6db52dc0a7e,ResourceVersion:22070297,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc002aec477 0xc002aec478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aec4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aec500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.106: INFO: Pod "nginx-deployment-7b8c6f4498-xt2h2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xt2h2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-xt2h2,UID:1431fdbe-0548-4d8a-96fc-7c1a1cea6fc2,ResourceVersion:22070169,Generation:0,CreationTimestamp:2020-01-27 14:09:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc002aec587 
0xc002aec588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aec5f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aec610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 
14:09:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-27 14:09:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1b675cef1763efbaa7b5220850faa2bb528594be0293248fb98d5be6f441796b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 27 14:10:02.106: INFO: Pod "nginx-deployment-7b8c6f4498-xwqkp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xwqkp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/nginx-deployment-7b8c6f4498-xwqkp,UID:0751f17d-53db-4814-990e-5303d2aa1f3f,ResourceVersion:22070299,Generation:0,CreationTimestamp:2020-01-27 14:09:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 76f88f25-9620-4043-a51d-e2fa394f41b5 0xc002aec6e7 0xc002aec6e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8chz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8chz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f8chz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aec760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aec780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:09:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 27 14:10:02.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5130" for this suite. 
Jan 27 14:11:11.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:11:14.290: INFO: namespace deployment-5130 deletion completed in 1m11.947225266s

• [SLOW TEST:111.351 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:11:14.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 27 14:11:15.424: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:11:42.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5487" for this suite.
Jan 27 14:11:48.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:11:48.727: INFO: namespace init-container-5487 deletion completed in 6.218930623s

• [SLOW TEST:34.437 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:11:48.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:11:49.481: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 23.549753ms)
Jan 27 14:11:49.501: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 19.740146ms)
Jan 27 14:11:49.522: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 20.807124ms)
Jan 27 14:11:49.536: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.703507ms)
Jan 27 14:11:49.550: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 14.310658ms)
Jan 27 14:11:49.604: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 53.176477ms)
Jan 27 14:11:49.615: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.951914ms)
Jan 27 14:11:49.626: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.695383ms)
Jan 27 14:11:49.636: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.010277ms)
Jan 27 14:11:49.644: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.765044ms)
Jan 27 14:11:49.652: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.268526ms)
Jan 27 14:11:49.659: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.001601ms)
Jan 27 14:11:49.666: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.203166ms)
Jan 27 14:11:49.674: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.139289ms)
Jan 27 14:11:49.681: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.721725ms)
Jan 27 14:11:49.688: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.941079ms)
Jan 27 14:11:49.695: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.598207ms)
Jan 27 14:11:49.703: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.466442ms)
Jan 27 14:11:49.709: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.152668ms)
Jan 27 14:11:49.715: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.800989ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:11:49.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7819" for this suite.
Jan 27 14:11:55.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:11:55.893: INFO: namespace proxy-7819 deletion completed in 6.1724691s

• [SLOW TEST:7.164 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
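Editor's note: the twenty numbered requests in the spec above all hit the node's proxy subresource on the apiserver. They can be reproduced by hand with kubectl's raw API access; a sketch against this run's node (output is the node's log directory listing, truncated in the log above):

```
# Fetch the kubelet log directory through the apiserver's
# node proxy subresource, as the e2e test does per attempt.
kubectl --kubeconfig=/root/.kube/config \
  get --raw /api/v1/nodes/iruya-node/proxy/logs/
```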
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:11:55.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-8fb1fa83-4c26-40fd-92fd-1c24229d4fae in namespace container-probe-3022
Jan 27 14:12:06.047: INFO: Started pod busybox-8fb1fa83-4c26-40fd-92fd-1c24229d4fae in namespace container-probe-3022
STEP: checking the pod's current state and verifying that restartCount is present
Jan 27 14:12:06.052: INFO: Initial restart count of pod busybox-8fb1fa83-4c26-40fd-92fd-1c24229d4fae is 0
Jan 27 14:12:56.331: INFO: Restart count of pod container-probe-3022/busybox-8fb1fa83-4c26-40fd-92fd-1c24229d4fae is now 1 (50.278964817s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:12:56.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3022" for this suite.
Jan 27 14:13:02.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:13:02.525: INFO: namespace container-probe-3022 deletion completed in 6.12685595s

• [SLOW TEST:66.629 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
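Editor's note: the restart observed above (restartCount 0 -> 1 after ~50s) comes from an exec liveness probe that starts failing once its sentinel file disappears. A minimal manifest reproducing the behaviour, modelled on the Kubernetes docs exec-liveness example rather than the exact spec the test generates (name, image args, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-exec-liveness   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create /tmp/health, then delete it so the probe starts
    # failing and the kubelet restarts the container.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```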
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:13:02.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 27 14:13:20.754: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:20.813: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:22.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:22.824: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:24.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:24.822: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:26.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:26.832: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:28.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:28.825: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:30.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:30.824: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:32.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:32.822: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:34.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:34.825: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:36.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:36.822: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:38.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:38.825: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:40.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:40.830: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:42.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:42.823: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:44.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:44.823: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 14:13:46.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 14:13:46.818: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:13:46.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8888" for this suite.
Jan 27 14:14:08.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:14:08.981: INFO: namespace container-lifecycle-hook-8888 deletion completed in 22.123620942s

• [SLOW TEST:66.455 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
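Editor's note: the `pod-with-prestop-exec-hook` pod deleted above carries a `lifecycle.preStop` exec handler, which is why it lingers for ~26s after deletion before disappearing. A sketch of such a pod (image, command, and the handler endpoint are illustrative assumptions, not the spec the test builds):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox   # illustrative image
    args: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before SIGTERM is delivered;
          # the e2e test uses this window to ping its handler pod.
          # Hostname and path here are hypothetical.
          command: ["/bin/sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"]
```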
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:14:08.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-434df058-8834-4b2d-86a5-ad680407ff56 in namespace container-probe-3335
Jan 27 14:14:17.138: INFO: Started pod test-webserver-434df058-8834-4b2d-86a5-ad680407ff56 in namespace container-probe-3335
STEP: checking the pod's current state and verifying that restartCount is present
Jan 27 14:14:17.143: INFO: Initial restart count of pod test-webserver-434df058-8834-4b2d-86a5-ad680407ff56 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:18:18.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3335" for this suite.
Jan 27 14:18:24.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:18:24.986: INFO: namespace container-probe-3335 deletion completed in 6.201263556s

• [SLOW TEST:256.004 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
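Editor's note: this spec is the inverse of the exec-probe case above — the webserver keeps answering its HTTP liveness probe, so after the ~4-minute observation window restartCount stays 0. A sketch of a pod with such a probe (image, port, and timings are illustrative; the probed path follows the test's name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver   # illustrative name
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.8   # assumed; any server returning 200 works
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3   # container is only restarted after 3 consecutive failures
```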
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:18:24.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 27 14:18:25.168: INFO: namespace kubectl-8022
Jan 27 14:18:25.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8022'
Jan 27 14:18:27.292: INFO: stderr: ""
Jan 27 14:18:27.293: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 27 14:18:28.317: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:28.317: INFO: Found 0 / 1
Jan 27 14:18:29.375: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:29.375: INFO: Found 0 / 1
Jan 27 14:18:30.305: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:30.305: INFO: Found 0 / 1
Jan 27 14:18:31.361: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:31.361: INFO: Found 0 / 1
Jan 27 14:18:32.344: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:32.344: INFO: Found 0 / 1
Jan 27 14:18:33.382: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:33.383: INFO: Found 0 / 1
Jan 27 14:18:34.305: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:34.305: INFO: Found 0 / 1
Jan 27 14:18:35.322: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:35.323: INFO: Found 0 / 1
Jan 27 14:18:36.306: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:36.306: INFO: Found 1 / 1
Jan 27 14:18:36.306: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 27 14:18:36.314: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:18:36.314: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 27 14:18:36.314: INFO: wait on redis-master startup in kubectl-8022 
Jan 27 14:18:36.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9sgml redis-master --namespace=kubectl-8022'
Jan 27 14:18:36.544: INFO: stderr: ""
Jan 27 14:18:36.544: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Jan 14:18:35.024 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jan 14:18:35.024 # Server started, Redis version 3.2.12\n1:M 27 Jan 14:18:35.024 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jan 14:18:35.024 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 27 14:18:36.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8022'
Jan 27 14:18:36.788: INFO: stderr: ""
Jan 27 14:18:36.788: INFO: stdout: "service/rm2 exposed\n"
Jan 27 14:18:36.814: INFO: Service rm2 in namespace kubectl-8022 found.
STEP: exposing service
Jan 27 14:18:38.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8022'
Jan 27 14:18:38.997: INFO: stderr: ""
Jan 27 14:18:38.997: INFO: stdout: "service/rm3 exposed\n"
Jan 27 14:18:39.012: INFO: Service rm3 in namespace kubectl-8022 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:18:41.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8022" for this suite.
Jan 27 14:19:03.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:19:03.155: INFO: namespace kubectl-8022 deletion completed in 22.125605149s

• [SLOW TEST:38.169 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
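Editor's note: the kubectl invocations from the spec above, collected in one place (taken from this run's log; `expose` creates a Service whose targetPort maps back to the Redis container port 6379, and the manifest piped to `create -f -` is the suite's redis-master RC, not shown in the log):

```
kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8022
kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8022
kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8022
```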
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:19:03.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-hcllh in namespace proxy-2074
I0127 14:19:03.323447       8 runners.go:180] Created replication controller with name: proxy-service-hcllh, namespace: proxy-2074, replica count: 1
I0127 14:19:04.374293       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:05.374583       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:06.374813       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:07.375035       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:08.375275       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:09.375551       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:10.375768       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:11.376002       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 14:19:12.376316       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0127 14:19:13.376629       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0127 14:19:14.376857       8 runners.go:180] proxy-service-hcllh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 27 14:19:14.383: INFO: setup took 11.184202152s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 27 14:19:14.407: INFO: (0) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 24.278727ms)
Jan 27 14:19:14.407: INFO: (0) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 24.526589ms)
Jan 27 14:19:14.407: INFO: (0) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 24.542652ms)
Jan 27 14:19:14.408: INFO: (0) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 25.426132ms)
Jan 27 14:19:14.409: INFO: (0) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 25.814302ms)
Jan 27 14:19:14.409: INFO: (0) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 26.069958ms)
Jan 27 14:19:14.409: INFO: (0) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 26.700711ms)
Jan 27 14:19:14.410: INFO: (0) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 26.668211ms)
Jan 27 14:19:14.410: INFO: (0) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 26.796334ms)
Jan 27 14:19:14.410: INFO: (0) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 27.48792ms)
Jan 27 14:19:14.423: INFO: (0) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 39.889099ms)
Jan 27 14:19:14.427: INFO: (0) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 43.923434ms)
Jan 27 14:19:14.427: INFO: (0) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: ... (200; 21.241793ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 21.886172ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test<... (200; 21.916246ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 22.191966ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 21.957528ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 22.213589ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 22.331753ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 21.93253ms)
Jan 27 14:19:14.450: INFO: (1) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 22.48309ms)
Jan 27 14:19:14.452: INFO: (1) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 24.191434ms)
Jan 27 14:19:14.453: INFO: (1) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 24.596586ms)
Jan 27 14:19:14.453: INFO: (1) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 25.363188ms)
Jan 27 14:19:14.468: INFO: (2) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 14.473111ms)
Jan 27 14:19:14.469: INFO: (2) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 14.799742ms)
Jan 27 14:19:14.469: INFO: (2) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 15.572886ms)
Jan 27 14:19:14.470: INFO: (2) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 16.216978ms)
Jan 27 14:19:14.470: INFO: (2) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 16.404729ms)
Jan 27 14:19:14.470: INFO: (2) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 16.628057ms)
Jan 27 14:19:14.474: INFO: (2) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 20.04884ms)
Jan 27 14:19:14.474: INFO: (2) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 20.667975ms)
Jan 27 14:19:14.474: INFO: (2) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 20.467795ms)
Jan 27 14:19:14.474: INFO: (2) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test (200; 18.837014ms)
Jan 27 14:19:14.499: INFO: (3) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 18.036565ms)
Jan 27 14:19:14.499: INFO: (3) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 18.200133ms)
Jan 27 14:19:14.499: INFO: (3) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 19.126148ms)
Jan 27 14:19:14.500: INFO: (3) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 20.385189ms)
Jan 27 14:19:14.500: INFO: (3) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 19.108563ms)
Jan 27 14:19:14.500: INFO: (3) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 19.069997ms)
Jan 27 14:19:14.500: INFO: (3) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 19.661324ms)
Jan 27 14:19:14.500: INFO: (3) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test<... (200; 26.008224ms)
Jan 27 14:19:14.507: INFO: (3) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 27.281467ms)
Jan 27 14:19:14.508: INFO: (3) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 27.40813ms)
Jan 27 14:19:14.518: INFO: (4) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 9.765701ms)
Jan 27 14:19:14.524: INFO: (4) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 14.406172ms)
Jan 27 14:19:14.524: INFO: (4) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 15.734008ms)
Jan 27 14:19:14.526: INFO: (4) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 16.513108ms)
Jan 27 14:19:14.526: INFO: (4) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 17.33352ms)
Jan 27 14:19:14.526: INFO: (4) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 17.625821ms)
Jan 27 14:19:14.526: INFO: (4) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 17.396395ms)
Jan 27 14:19:14.526: INFO: (4) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 17.642371ms)
Jan 27 14:19:14.527: INFO: (4) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 16.960819ms)
Jan 27 14:19:14.527: INFO: (4) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 16.61066ms)
Jan 27 14:19:14.527: INFO: (4) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 17.148845ms)
Jan 27 14:19:14.527: INFO: (4) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 17.092244ms)
Jan 27 14:19:14.528: INFO: (4) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 18.998842ms)
Jan 27 14:19:14.528: INFO: (4) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 17.485497ms)
Jan 27 14:19:14.528: INFO: (4) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 18.586551ms)
Jan 27 14:19:14.529: INFO: (4) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: ... (200; 18.797346ms)
Jan 27 14:19:14.549: INFO: (5) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 18.615062ms)
Jan 27 14:19:14.549: INFO: (5) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 19.551362ms)
Jan 27 14:19:14.551: INFO: (5) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 21.427777ms)
Jan 27 14:19:14.551: INFO: (5) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 21.659641ms)
Jan 27 14:19:14.552: INFO: (5) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 21.319261ms)
Jan 27 14:19:14.552: INFO: (5) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 22.815768ms)
Jan 27 14:19:14.553: INFO: (5) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 22.220614ms)
Jan 27 14:19:14.553: INFO: (5) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 23.503515ms)
Jan 27 14:19:14.567: INFO: (6) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 13.776418ms)
Jan 27 14:19:14.567: INFO: (6) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 13.97866ms)
Jan 27 14:19:14.570: INFO: (6) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 15.504583ms)
Jan 27 14:19:14.570: INFO: (6) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 15.917094ms)
Jan 27 14:19:14.571: INFO: (6) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 17.054667ms)
Jan 27 14:19:14.572: INFO: (6) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 18.228271ms)
Jan 27 14:19:14.572: INFO: (6) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 18.480894ms)
Jan 27 14:19:14.573: INFO: (6) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 18.546888ms)
Jan 27 14:19:14.573: INFO: (6) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 18.758379ms)
Jan 27 14:19:14.573: INFO: (6) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 19.23957ms)
Jan 27 14:19:14.573: INFO: (6) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 19.371476ms)
Jan 27 14:19:14.574: INFO: (6) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 19.609883ms)
Jan 27 14:19:14.574: INFO: (6) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 20.102238ms)
Jan 27 14:19:14.575: INFO: (6) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: ... (200; 8.754849ms)
Jan 27 14:19:14.585: INFO: (7) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 8.484179ms)
Jan 27 14:19:14.585: INFO: (7) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 8.749098ms)
Jan 27 14:19:14.585: INFO: (7) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 8.763966ms)
Jan 27 14:19:14.585: INFO: (7) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 8.639323ms)
Jan 27 14:19:14.585: INFO: (7) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 8.883601ms)
Jan 27 14:19:14.585: INFO: (7) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 8.823147ms)
Jan 27 14:19:14.586: INFO: (7) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 9.355019ms)
Jan 27 14:19:14.586: INFO: (7) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 9.177327ms)
Jan 27 14:19:14.586: INFO: (7) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test (200; 13.653521ms)
Jan 27 14:19:14.606: INFO: (8) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 13.920143ms)
Jan 27 14:19:14.607: INFO: (8) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 14.382733ms)
Jan 27 14:19:14.607: INFO: (8) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 14.577069ms)
Jan 27 14:19:14.607: INFO: (8) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 14.79633ms)
Jan 27 14:19:14.607: INFO: (8) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 14.665583ms)
Jan 27 14:19:14.607: INFO: (8) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: ... (200; 15.685301ms)
Jan 27 14:19:14.608: INFO: (8) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 15.883455ms)
Jan 27 14:19:14.609: INFO: (8) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 17.40663ms)
Jan 27 14:19:14.610: INFO: (8) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 17.999218ms)
Jan 27 14:19:14.610: INFO: (8) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 18.191382ms)
Jan 27 14:19:14.611: INFO: (8) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 18.545782ms)
Jan 27 14:19:14.619: INFO: (9) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 8.040226ms)
Jan 27 14:19:14.620: INFO: (9) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 9.093406ms)
Jan 27 14:19:14.620: INFO: (9) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 9.204157ms)
Jan 27 14:19:14.620: INFO: (9) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 9.459189ms)
Jan 27 14:19:14.621: INFO: (9) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 9.602747ms)
Jan 27 14:19:14.621: INFO: (9) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 9.675468ms)
Jan 27 14:19:14.622: INFO: (9) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test<... (200; 11.710002ms)
Jan 27 14:19:14.624: INFO: (9) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 12.564641ms)
Jan 27 14:19:14.626: INFO: (9) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 15.333283ms)
Jan 27 14:19:14.627: INFO: (9) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 15.753102ms)
Jan 27 14:19:14.627: INFO: (9) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 15.734127ms)
Jan 27 14:19:14.627: INFO: (9) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 16.0791ms)
Jan 27 14:19:14.638: INFO: (10) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 10.419903ms)
Jan 27 14:19:14.638: INFO: (10) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 10.463959ms)
Jan 27 14:19:14.638: INFO: (10) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 11.075651ms)
Jan 27 14:19:14.638: INFO: (10) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 11.132163ms)
Jan 27 14:19:14.642: INFO: (10) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 14.686648ms)
Jan 27 14:19:14.643: INFO: (10) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 15.240189ms)
Jan 27 14:19:14.644: INFO: (10) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 16.063598ms)
Jan 27 14:19:14.644: INFO: (10) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 16.643978ms)
Jan 27 14:19:14.644: INFO: (10) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: ... (200; 16.878734ms)
Jan 27 14:19:14.645: INFO: (10) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 17.360462ms)
Jan 27 14:19:14.645: INFO: (10) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 17.6217ms)
Jan 27 14:19:14.657: INFO: (11) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 11.13472ms)
Jan 27 14:19:14.657: INFO: (11) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 11.37005ms)
Jan 27 14:19:14.657: INFO: (11) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 11.963243ms)
Jan 27 14:19:14.657: INFO: (11) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 11.778076ms)
Jan 27 14:19:14.657: INFO: (11) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 11.832859ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 12.149451ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 12.592691ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 12.581757ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 12.530065ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 12.706583ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 12.859789ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 13.044653ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 13.287051ms)
Jan 27 14:19:14.658: INFO: (11) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test (200; 14.161576ms)
Jan 27 14:19:14.676: INFO: (12) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 14.066486ms)
Jan 27 14:19:14.676: INFO: (12) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 14.236234ms)
Jan 27 14:19:14.676: INFO: (12) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 14.096452ms)
Jan 27 14:19:14.677: INFO: (12) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 14.488768ms)
Jan 27 14:19:14.677: INFO: (12) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 14.643662ms)
Jan 27 14:19:14.677: INFO: (12) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 14.555255ms)
Jan 27 14:19:14.682: INFO: (12) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 19.983647ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 9.480535ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 9.432265ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 9.503259ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 9.709147ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 9.647734ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 9.690634ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 10.177141ms)
Jan 27 14:19:14.692: INFO: (13) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test (200; 11.807734ms)
Jan 27 14:19:14.698: INFO: (13) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 15.864798ms)
Jan 27 14:19:14.698: INFO: (13) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 16.235895ms)
Jan 27 14:19:14.700: INFO: (13) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 17.780527ms)
Jan 27 14:19:14.701: INFO: (13) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 19.086897ms)
Jan 27 14:19:14.701: INFO: (13) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 19.325436ms)
Jan 27 14:19:14.702: INFO: (13) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 19.489922ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 9.870153ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 9.912084ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 10.109866ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 10.05007ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test<... (200; 10.121017ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 10.02307ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 9.994401ms)
Jan 27 14:19:14.712: INFO: (14) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 10.063845ms)
Jan 27 14:19:14.714: INFO: (14) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 11.845267ms)
Jan 27 14:19:14.714: INFO: (14) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 12.31299ms)
Jan 27 14:19:14.714: INFO: (14) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 12.302085ms)
Jan 27 14:19:14.714: INFO: (14) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 12.53306ms)
Jan 27 14:19:14.715: INFO: (14) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 12.704996ms)
Jan 27 14:19:14.715: INFO: (14) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 13.468334ms)
Jan 27 14:19:14.720: INFO: (15) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 4.617262ms)
Jan 27 14:19:14.721: INFO: (15) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 5.174527ms)
Jan 27 14:19:14.722: INFO: (15) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 6.316035ms)
Jan 27 14:19:14.722: INFO: (15) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 6.447102ms)
Jan 27 14:19:14.723: INFO: (15) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 7.467719ms)
Jan 27 14:19:14.723: INFO: (15) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 7.815101ms)
Jan 27 14:19:14.724: INFO: (15) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: ... (200; 12.967058ms)
Jan 27 14:19:14.739: INFO: (16) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 13.317984ms)
Jan 27 14:19:14.739: INFO: (16) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 13.213764ms)
Jan 27 14:19:14.740: INFO: (16) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 13.874095ms)
Jan 27 14:19:14.740: INFO: (16) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 13.691393ms)
Jan 27 14:19:14.740: INFO: (16) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 13.854454ms)
Jan 27 14:19:14.740: INFO: (16) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 14.101641ms)
Jan 27 14:19:14.747: INFO: (17) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p/proxy/: test (200; 6.373342ms)
Jan 27 14:19:14.749: INFO: (17) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: ... (200; 11.569195ms)
Jan 27 14:19:14.753: INFO: (17) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 11.802347ms)
Jan 27 14:19:14.753: INFO: (17) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 11.793284ms)
Jan 27 14:19:14.753: INFO: (17) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 12.074354ms)
Jan 27 14:19:14.755: INFO: (17) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 13.841003ms)
Jan 27 14:19:14.755: INFO: (17) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 14.169644ms)
Jan 27 14:19:14.755: INFO: (17) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 14.409019ms)
Jan 27 14:19:14.755: INFO: (17) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 15.041899ms)
Jan 27 14:19:14.756: INFO: (17) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 15.09406ms)
Jan 27 14:19:14.761: INFO: (18) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test (200; 18.547856ms)
Jan 27 14:19:14.775: INFO: (18) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 18.67583ms)
Jan 27 14:19:14.775: INFO: (18) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 18.81464ms)
Jan 27 14:19:14.775: INFO: (18) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 18.782573ms)
Jan 27 14:19:14.775: INFO: (18) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 18.865446ms)
Jan 27 14:19:14.775: INFO: (18) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 18.741835ms)
Jan 27 14:19:14.776: INFO: (18) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 19.708003ms)
Jan 27 14:19:14.776: INFO: (18) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 19.581826ms)
Jan 27 14:19:14.776: INFO: (18) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 20.247411ms)
Jan 27 14:19:14.777: INFO: (18) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 20.334495ms)
Jan 27 14:19:14.777: INFO: (18) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 20.951925ms)
Jan 27 14:19:14.784: INFO: (19) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:1080/proxy/: test<... (200; 6.419366ms)
Jan 27 14:19:14.785: INFO: (19) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:443/proxy/: test (200; 8.108165ms)
Jan 27 14:19:14.791: INFO: (19) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:162/proxy/: bar (200; 13.062024ms)
Jan 27 14:19:14.791: INFO: (19) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname1/proxy/: foo (200; 13.902904ms)
Jan 27 14:19:14.791: INFO: (19) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:1080/proxy/: ... (200; 14.019655ms)
Jan 27 14:19:14.792: INFO: (19) /api/v1/namespaces/proxy-2074/services/proxy-service-hcllh:portname2/proxy/: bar (200; 13.890795ms)
Jan 27 14:19:14.792: INFO: (19) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:462/proxy/: tls qux (200; 14.244ms)
Jan 27 14:19:14.792: INFO: (19) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname1/proxy/: tls baz (200; 14.281225ms)
Jan 27 14:19:14.792: INFO: (19) /api/v1/namespaces/proxy-2074/pods/proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 14.406911ms)
Jan 27 14:19:14.792: INFO: (19) /api/v1/namespaces/proxy-2074/pods/https:proxy-service-hcllh-l2p7p:460/proxy/: tls baz (200; 14.590451ms)
Jan 27 14:19:14.793: INFO: (19) /api/v1/namespaces/proxy-2074/pods/http:proxy-service-hcllh-l2p7p:160/proxy/: foo (200; 15.240044ms)
Jan 27 14:19:14.793: INFO: (19) /api/v1/namespaces/proxy-2074/services/https:proxy-service-hcllh:tlsportname2/proxy/: tls qux (200; 15.414164ms)
Jan 27 14:19:14.793: INFO: (19) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname1/proxy/: foo (200; 15.573459ms)
Jan 27 14:19:14.796: INFO: (19) /api/v1/namespaces/proxy-2074/services/http:proxy-service-hcllh:portname2/proxy/: bar (200; 18.862206ms)
STEP: deleting ReplicationController proxy-service-hcllh in namespace proxy-2074, will wait for the garbage collector to delete the pods
Jan 27 14:19:14.865: INFO: Deleting ReplicationController proxy-service-hcllh took: 14.263173ms
Jan 27 14:19:15.166: INFO: Terminating ReplicationController proxy-service-hcllh pods took: 300.659002ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:19:20.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2074" for this suite.
Jan 27 14:19:26.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:19:27.054: INFO: namespace proxy-2074 deletion completed in 6.169954956s

• [SLOW TEST:23.899 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
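The per-request lines in the proxy test above all follow one fixed shape: an iteration number in parentheses, the proxied API path, a truncated response body, and `(status; latency)`. A minimal sketch of pulling the latency samples out of such a log, assuming Python and the log text supplied as a plain string (`parse_latencies` is a hypothetical helper, not part of the e2e framework):

```python
import re

# Matches e2e proxy log lines such as:
#   Jan 27 14:19:14.450: INFO: (1) /api/v1/.../proxy/: foo (200; 22.213589ms)
# capturing the iteration number, HTTP status code, and latency in ms.
LINE_RE = re.compile(
    r"\((?P<iter>\d+)\)\s+\S+:\s+.*"
    r"\((?P<status>\d{3});\s+(?P<latency>[\d.]+)ms\)"
)

def parse_latencies(log_text):
    """Return a list of (iteration, status, latency_ms) tuples,
    skipping lines that do not match the request-sample shape."""
    samples = []
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m:
            samples.append((int(m.group("iter")),
                            int(m.group("status")),
                            float(m.group("latency"))))
    return samples
```

Feeding the log excerpt above through this gives one sample per request line, which makes it easy to compute per-iteration or overall latency statistics.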
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:19:27.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 27 14:19:27.728: INFO: created pod pod-service-account-defaultsa
Jan 27 14:19:27.728: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 27 14:19:27.738: INFO: created pod pod-service-account-mountsa
Jan 27 14:19:27.738: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 27 14:19:27.757: INFO: created pod pod-service-account-nomountsa
Jan 27 14:19:27.757: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 27 14:19:27.804: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 27 14:19:27.804: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 27 14:19:27.880: INFO: created pod pod-service-account-mountsa-mountspec
Jan 27 14:19:27.880: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 27 14:19:27.915: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 27 14:19:27.915: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 27 14:19:27.941: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 27 14:19:27.941: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 27 14:19:28.093: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 27 14:19:28.093: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 27 14:19:28.127: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 27 14:19:28.127: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:19:28.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2445" for this suite.
Jan 27 14:20:06.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:20:06.704: INFO: namespace svcaccounts-2445 deletion completed in 37.702691147s

• [SLOW TEST:39.649 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
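The pod names logged in the ServiceAccounts test above encode the combinations being exercised: the effective token mount is the pod spec's `automountServiceAccountToken` if set, otherwise the ServiceAccount's setting, otherwise `true`. A minimal sketch of that precedence rule (`token_volume_mounted` is a hypothetical illustration of the documented behavior, not the framework's code):

```python
def token_volume_mounted(pod_automount=None, sa_automount=None):
    """Effective automountServiceAccountToken: the pod spec wins if set,
    then the ServiceAccount's field, then the default of True."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```

This reproduces the values logged above: `pod-service-account-nomountsa` gets no token volume (SA opts out), while `pod-service-account-nomountsa-mountspec` gets one because the pod spec overrides the SA.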
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:20:06.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:20:06.802: INFO: Creating deployment "test-recreate-deployment"
Jan 27 14:20:06.810: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 27 14:20:06.860: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 27 14:20:08.882: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 27 14:20:08.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:20:10.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:20:12.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:20:14.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:20:16.899: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 27 14:20:16.920: INFO: Updating deployment test-recreate-deployment
Jan 27 14:20:16.920: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 27 14:20:17.266: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-904,SelfLink:/apis/apps/v1/namespaces/deployment-904/deployments/test-recreate-deployment,UID:de49923a-ff49-400b-9e04-ea189a1bc359,ResourceVersion:22071673,Generation:2,CreationTimestamp:2020-01-27 14:20:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-27 14:20:17 +0000 UTC 2020-01-27 14:20:17 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-27 14:20:17 +0000 UTC 2020-01-27 14:20:06 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 27 14:20:17.270: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-904,SelfLink:/apis/apps/v1/namespaces/deployment-904/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5d5b2ccc-e3a6-4b3f-86d9-e67652b475d4,ResourceVersion:22071672,Generation:1,CreationTimestamp:2020-01-27 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment de49923a-ff49-400b-9e04-ea189a1bc359 0xc00063b317 0xc00063b318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 14:20:17.270: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 27 14:20:17.270: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-904,SelfLink:/apis/apps/v1/namespaces/deployment-904/replicasets/test-recreate-deployment-6df85df6b9,UID:23670c4f-ad56-4e23-994f-d007e5897289,ResourceVersion:22071663,Generation:2,CreationTimestamp:2020-01-27 14:20:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment de49923a-ff49-400b-9e04-ea189a1bc359 0xc00063b467 0xc00063b468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 14:20:17.345: INFO: Pod "test-recreate-deployment-5c8c9cc69d-r9hwj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-r9hwj,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-904,SelfLink:/api/v1/namespaces/deployment-904/pods/test-recreate-deployment-5c8c9cc69d-r9hwj,UID:b6e0da83-1b8c-4fff-b110-b4d740784cf1,ResourceVersion:22071671,Generation:0,CreationTimestamp:2020-01-27 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5d5b2ccc-e3a6-4b3f-86d9-e67652b475d4 0xc001a6c3f7 0xc001a6c3f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nwwmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nwwmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nwwmn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a6c470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a6c490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:20:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:20:17.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-904" for this suite.
Jan 27 14:20:23.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:20:23.535: INFO: namespace deployment-904 deletion completed in 6.177050814s

• [SLOW TEST:16.831 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:20:23.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 27 14:20:23.617: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 27 14:20:23.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4859'
Jan 27 14:20:25.561: INFO: stderr: ""
Jan 27 14:20:25.561: INFO: stdout: "service/redis-slave created\n"
Jan 27 14:20:25.561: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 27 14:20:25.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4859'
Jan 27 14:20:26.049: INFO: stderr: ""
Jan 27 14:20:26.049: INFO: stdout: "service/redis-master created\n"
Jan 27 14:20:26.050: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 27 14:20:26.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4859'
Jan 27 14:20:26.547: INFO: stderr: ""
Jan 27 14:20:26.547: INFO: stdout: "service/frontend created\n"
Jan 27 14:20:26.548: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 27 14:20:26.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4859'
Jan 27 14:20:26.932: INFO: stderr: ""
Jan 27 14:20:26.932: INFO: stdout: "deployment.apps/frontend created\n"
Jan 27 14:20:26.933: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 27 14:20:26.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4859'
Jan 27 14:20:27.398: INFO: stderr: ""
Jan 27 14:20:27.398: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 27 14:20:27.399: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 27 14:20:27.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4859'
Jan 27 14:20:28.616: INFO: stderr: ""
Jan 27 14:20:28.616: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 27 14:20:28.616: INFO: Waiting for all frontend pods to be Running.
Jan 27 14:20:58.668: INFO: Waiting for frontend to serve content.
Jan 27 14:20:58.774: INFO: Trying to add a new entry to the guestbook.
Jan 27 14:20:58.832: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 27 14:20:58.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4859'
Jan 27 14:20:59.203: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:20:59.203: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 27 14:20:59.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4859'
Jan 27 14:20:59.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:20:59.400: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 27 14:20:59.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4859'
Jan 27 14:20:59.561: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:20:59.561: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 27 14:20:59.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4859'
Jan 27 14:20:59.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:20:59.700: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 27 14:20:59.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4859'
Jan 27 14:20:59.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:20:59.796: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 27 14:20:59.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4859'
Jan 27 14:20:59.938: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:20:59.938: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:20:59.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4859" for this suite.
Jan 27 14:21:52.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:21:52.286: INFO: namespace kubectl-4859 deletion completed in 52.277224423s

• [SLOW TEST:88.751 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:21:52.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-zgtf
STEP: Creating a pod to test atomic-volume-subpath
Jan 27 14:21:52.450: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zgtf" in namespace "subpath-3604" to be "success or failure"
Jan 27 14:21:52.461: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.563739ms
Jan 27 14:21:54.480: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028978822s
Jan 27 14:21:56.493: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042014289s
Jan 27 14:21:58.508: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057211978s
Jan 27 14:22:00.517: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065946723s
Jan 27 14:22:02.534: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 10.083225728s
Jan 27 14:22:04.564: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 12.113236236s
Jan 27 14:22:06.579: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 14.128307182s
Jan 27 14:22:08.589: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 16.138593814s
Jan 27 14:22:10.604: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 18.153257984s
Jan 27 14:22:12.643: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 20.191966816s
Jan 27 14:22:14.651: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 22.200457537s
Jan 27 14:22:16.665: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 24.214420611s
Jan 27 14:22:18.676: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 26.225557165s
Jan 27 14:22:20.725: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Running", Reason="", readiness=true. Elapsed: 28.274189347s
Jan 27 14:22:22.730: INFO: Pod "pod-subpath-test-secret-zgtf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.27978852s
STEP: Saw pod success
Jan 27 14:22:22.730: INFO: Pod "pod-subpath-test-secret-zgtf" satisfied condition "success or failure"
Jan 27 14:22:22.733: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-zgtf container test-container-subpath-secret-zgtf: 
STEP: delete the pod
Jan 27 14:22:22.879: INFO: Waiting for pod pod-subpath-test-secret-zgtf to disappear
Jan 27 14:22:22.893: INFO: Pod pod-subpath-test-secret-zgtf no longer exists
STEP: Deleting pod pod-subpath-test-secret-zgtf
Jan 27 14:22:22.893: INFO: Deleting pod "pod-subpath-test-secret-zgtf" in namespace "subpath-3604"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:22:22.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3604" for this suite.
Jan 27 14:22:28.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:22:29.049: INFO: namespace subpath-3604 deletion completed in 6.140521772s

• [SLOW TEST:36.762 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:22:29.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 27 14:22:29.221: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 27 14:22:29.237: INFO: Waiting for terminating namespaces to be deleted...
Jan 27 14:22:29.240: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 27 14:22:29.250: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 27 14:22:29.250: INFO: 	Container weave ready: true, restart count 0
Jan 27 14:22:29.250: INFO: 	Container weave-npc ready: true, restart count 0
Jan 27 14:22:29.250: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.250: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 27 14:22:29.250: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 27 14:22:29.259: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 27 14:22:29.259: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 27 14:22:29.259: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 27 14:22:29.259: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 27 14:22:29.259: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container coredns ready: true, restart count 0
Jan 27 14:22:29.259: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container etcd ready: true, restart count 0
Jan 27 14:22:29.259: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container weave ready: true, restart count 0
Jan 27 14:22:29.259: INFO: 	Container weave-npc ready: true, restart count 0
Jan 27 14:22:29.259: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 27 14:22:29.259: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15edc45d4d02eae0], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:22:30.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1766" for this suite.
Jan 27 14:22:36.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:22:36.415: INFO: namespace sched-pred-1766 deletion completed in 6.118281893s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.366 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
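The FailedScheduling event above ("0/2 nodes are available: 2 node(s) didn't match node selector") follows from the nodeSelector predicate: a pod fits a node only if every key/value pair in its nodeSelector is present in the node's labels. A minimal sketch of that check — the labels and the pod's selector below are illustrative, not taken from the test run:

```python
# Sketch of the nodeSelector predicate behind the FailedScheduling event.
# Node labels and the pod selector here are hypothetical examples.

def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """A pod fits a node only if every nodeSelector entry is on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

nodes = {
    "iruya-node": {"kubernetes.io/os": "linux"},
    "iruya-server-sfge57q7djm7": {"kubernetes.io/os": "linux"},
}
# The test schedules a pod with a nonempty selector that no node carries:
pod_selector = {"example.com/e2e-label": "42"}

feasible = [name for name, labels in nodes.items()
            if node_selector_matches(labels, pod_selector)]
print(f"{len(feasible)}/{len(nodes)} nodes are available")
```

An empty selector matches every node, which is why only a nonempty, unmatched selector can force the 0/2 result the event reports.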
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:22:36.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-234eb28d-1029-468b-b04b-c3747d8e6b07
STEP: Creating a pod to test consume secrets
Jan 27 14:22:36.522: INFO: Waiting up to 5m0s for pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528" in namespace "secrets-3429" to be "success or failure"
Jan 27 14:22:36.543: INFO: Pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528": Phase="Pending", Reason="", readiness=false. Elapsed: 20.407289ms
Jan 27 14:22:38.557: INFO: Pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034352493s
Jan 27 14:22:41.299: INFO: Pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528": Phase="Pending", Reason="", readiness=false. Elapsed: 4.776502766s
Jan 27 14:22:43.308: INFO: Pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528": Phase="Pending", Reason="", readiness=false. Elapsed: 6.78586966s
Jan 27 14:22:45.316: INFO: Pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528": Phase="Pending", Reason="", readiness=false. Elapsed: 8.793785855s
Jan 27 14:22:47.331: INFO: Pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.808203229s
STEP: Saw pod success
Jan 27 14:22:47.331: INFO: Pod "pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528" satisfied condition "success or failure"
Jan 27 14:22:47.343: INFO: Trying to get logs from node iruya-node pod pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528 container secret-volume-test: 
STEP: delete the pod
Jan 27 14:22:47.397: INFO: Waiting for pod pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528 to disappear
Jan 27 14:22:47.406: INFO: Pod pod-secrets-d0d164ed-18a3-49da-8312-309a945bd528 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:22:47.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3429" for this suite.
Jan 27 14:22:53.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:22:53.659: INFO: namespace secrets-3429 deletion completed in 6.248805411s

• [SLOW TEST:17.243 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
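The repeated `Phase="Pending" ... Elapsed: ...` lines in the Secrets test above come from a poll loop: the framework re-reads the pod's phase until it is terminal or a 5m0s timeout expires. A minimal sketch of that loop, assuming a caller-supplied `get_phase` callable (this is not the e2e framework's actual API):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until Succeeded or Failed, mirroring the
    'Waiting up to 5m0s ... to be "success or failure"' log lines.
    get_phase, clock and sleep are injected here for testability."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        print(f'Pod: Phase="{phase}". Elapsed: {clock() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated run: three polls while Pending, then the pod succeeds.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
final = wait_for_pod_condition(lambda: next(phases), sleep=lambda _: None)
```

The real framework polls roughly every 2 seconds, which matches the ~2s spacing of the elapsed times in the log.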
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:22:53.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 27 14:23:04.445: INFO: Successfully updated pod "annotationupdatec0fcb844-c57b-4311-9cc3-17fd2692044a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:23:06.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1001" for this suite.
Jan 27 14:23:36.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:23:36.699: INFO: namespace downward-api-1001 deletion completed in 30.127394575s

• [SLOW TEST:43.039 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:23:36.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 27 14:23:36.802: INFO: Waiting up to 5m0s for pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e" in namespace "downward-api-3809" to be "success or failure"
Jan 27 14:23:36.810: INFO: Pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.905496ms
Jan 27 14:23:38.817: INFO: Pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015383168s
Jan 27 14:23:40.833: INFO: Pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031071553s
Jan 27 14:23:42.862: INFO: Pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059659965s
Jan 27 14:23:44.872: INFO: Pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069742958s
Jan 27 14:23:46.880: INFO: Pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078517763s
STEP: Saw pod success
Jan 27 14:23:46.880: INFO: Pod "downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e" satisfied condition "success or failure"
Jan 27 14:23:46.886: INFO: Trying to get logs from node iruya-node pod downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e container dapi-container: 
STEP: delete the pod
Jan 27 14:23:47.054: INFO: Waiting for pod downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e to disappear
Jan 27 14:23:47.073: INFO: Pod downward-api-43cf4230-4156-4279-bc6f-b7db5ca7803e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:23:47.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3809" for this suite.
Jan 27 14:23:53.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:23:53.330: INFO: namespace downward-api-3809 deletion completed in 6.241633415s

• [SLOW TEST:16.631 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
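The downward-API test above wires a container's own resource limits and requests into environment variables via `resourceFieldRef`. A hedged sketch of what that env section looks like in a pod spec — the env var names below are illustrative choices, while the `resourceFieldRef` field names follow the Kubernetes API:

```python
# Sketch of downward-API env wiring: each env var reads a resource field
# from the named container via valueFrom.resourceFieldRef. The env var
# names (CPU_LIMIT, ...) are assumptions for illustration.

def resource_env(container_name: str) -> list:
    fields = {
        "CPU_LIMIT": "limits.cpu",
        "MEMORY_LIMIT": "limits.memory",
        "CPU_REQUEST": "requests.cpu",
        "MEMORY_REQUEST": "requests.memory",
    }
    return [
        {"name": name,
         "valueFrom": {"resourceFieldRef": {"containerName": container_name,
                                            "resource": resource}}}
        for name, resource in fields.items()
    ]

env = resource_env("dapi-container")
```

The test then reads these variables back from the container's logs to confirm the kubelet resolved each field.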
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:23:53.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9905/configmap-test-bafb37ba-879d-421b-b064-6b39c3e426ca
STEP: Creating a pod to test consume configMaps
Jan 27 14:23:53.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6" in namespace "configmap-9905" to be "success or failure"
Jan 27 14:23:53.514: INFO: Pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047409ms
Jan 27 14:23:55.524: INFO: Pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013566999s
Jan 27 14:23:57.553: INFO: Pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042554383s
Jan 27 14:23:59.561: INFO: Pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05064152s
Jan 27 14:24:01.568: INFO: Pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056979767s
Jan 27 14:24:03.576: INFO: Pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06528667s
STEP: Saw pod success
Jan 27 14:24:03.576: INFO: Pod "pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6" satisfied condition "success or failure"
Jan 27 14:24:03.579: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6 container env-test: 
STEP: delete the pod
Jan 27 14:24:03.743: INFO: Waiting for pod pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6 to disappear
Jan 27 14:24:03.751: INFO: Pod pod-configmaps-5aa2d264-1312-44dd-adfd-667a3b2009d6 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:24:03.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9905" for this suite.
Jan 27 14:24:09.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:24:09.961: INFO: namespace configmap-9905 deletion completed in 6.199583852s

• [SLOW TEST:16.630 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:24:09.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 27 14:24:10.067: INFO: Waiting up to 5m0s for pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18" in namespace "emptydir-9975" to be "success or failure"
Jan 27 14:24:10.076: INFO: Pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18": Phase="Pending", Reason="", readiness=false. Elapsed: 9.402244ms
Jan 27 14:24:12.082: INFO: Pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015445832s
Jan 27 14:24:14.094: INFO: Pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026969525s
Jan 27 14:24:16.104: INFO: Pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037104028s
Jan 27 14:24:18.114: INFO: Pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046774684s
Jan 27 14:24:20.122: INFO: Pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054562315s
STEP: Saw pod success
Jan 27 14:24:20.122: INFO: Pod "pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18" satisfied condition "success or failure"
Jan 27 14:24:20.126: INFO: Trying to get logs from node iruya-node pod pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18 container test-container: 
STEP: delete the pod
Jan 27 14:24:20.291: INFO: Waiting for pod pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18 to disappear
Jan 27 14:24:20.306: INFO: Pod pod-9fa1d3c9-b321-4c1a-9ab0-9ef69df13f18 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:24:20.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9975" for this suite.
Jan 27 14:24:26.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:24:26.513: INFO: namespace emptydir-9975 deletion completed in 6.195949141s

• [SLOW TEST:16.552 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:24:26.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:24:26.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82" in namespace "projected-3304" to be "success or failure"
Jan 27 14:24:26.790: INFO: Pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82": Phase="Pending", Reason="", readiness=false. Elapsed: 54.780886ms
Jan 27 14:24:28.888: INFO: Pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152559839s
Jan 27 14:24:30.897: INFO: Pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161001189s
Jan 27 14:24:32.908: INFO: Pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172197904s
Jan 27 14:24:34.916: INFO: Pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179968118s
Jan 27 14:24:36.957: INFO: Pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.22127237s
STEP: Saw pod success
Jan 27 14:24:36.957: INFO: Pod "downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82" satisfied condition "success or failure"
Jan 27 14:24:36.963: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82 container client-container: 
STEP: delete the pod
Jan 27 14:24:37.023: INFO: Waiting for pod downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82 to disappear
Jan 27 14:24:37.032: INFO: Pod downwardapi-volume-7ca1264b-743c-4773-9772-418557e02c82 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:24:37.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3304" for this suite.
Jan 27 14:24:43.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:24:43.245: INFO: namespace projected-3304 deletion completed in 6.206838099s

• [SLOW TEST:16.729 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:24:43.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 27 14:24:49.721: INFO: 10 pods remaining
Jan 27 14:24:49.721: INFO: 10 pods have nil DeletionTimestamp
Jan 27 14:24:49.721: INFO: 
Jan 27 14:24:51.480: INFO: 9 pods remaining
Jan 27 14:24:51.480: INFO: 0 pods have nil DeletionTimestamp
Jan 27 14:24:51.480: INFO: 
Jan 27 14:24:51.841: INFO: 0 pods remaining
Jan 27 14:24:51.841: INFO: 0 pods have nil DeletionTimestamp
Jan 27 14:24:51.841: INFO: 
STEP: Gathering metrics
W0127 14:24:52.793395       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 14:24:52.793: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:24:52.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-615" for this suite.
Jan 27 14:25:04.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:25:04.945: INFO: namespace gc-615 deletion completed in 12.127868182s

• [SLOW TEST:21.700 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
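The garbage-collector test above checks the foreground-deletion invariant: when the rc is deleted with `deleteOptions` requesting foreground propagation, the rc itself must survive until every dependent pod is gone, and the log counts how many pods still lack a DeletionTimestamp. A minimal sketch of that invariant, using simplified dict-shaped pods rather than real API objects:

```python
# Sketch of the foreground-deletion invariant: the owner (the rc) may only
# be finalized once no dependent pods remain. Pod shapes are illustrative.

def owner_may_be_removed(dependents: list) -> bool:
    """Foreground deletion holds the owner until all dependents are gone."""
    return len(dependents) == 0

def pods_with_nil_deletion_timestamp(pods: list) -> int:
    return sum(1 for p in pods if p.get("deletionTimestamp") is None)

pods = [{"name": f"pod-{i}", "deletionTimestamp": None} for i in range(10)]
print(f"{len(pods)} pods remaining")
print(f"{pods_with_nil_deletion_timestamp(pods)} pods have nil DeletionTimestamp")

assert not owner_may_be_removed(pods)  # rc must be kept for now
assert owner_may_be_removed([])        # once all pods are gone, the rc can go
```

This mirrors the log's progression from "10 pods remaining" down to "0 pods remaining" before the namespace teardown proceeds.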
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:25:04.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 27 14:25:17.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-8d4c8438-a6a6-4c77-83e1-ac79902085a0 -c busybox-main-container --namespace=emptydir-5673 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 27 14:25:17.522: INFO: stderr: "I0127 14:25:17.234611    2627 log.go:172] (0xc000ac2420) (0xc00053e8c0) Create stream\nI0127 14:25:17.235116    2627 log.go:172] (0xc000ac2420) (0xc00053e8c0) Stream added, broadcasting: 1\nI0127 14:25:17.244868    2627 log.go:172] (0xc000ac2420) Reply frame received for 1\nI0127 14:25:17.244947    2627 log.go:172] (0xc000ac2420) (0xc00053e000) Create stream\nI0127 14:25:17.244962    2627 log.go:172] (0xc000ac2420) (0xc00053e000) Stream added, broadcasting: 3\nI0127 14:25:17.247317    2627 log.go:172] (0xc000ac2420) Reply frame received for 3\nI0127 14:25:17.247374    2627 log.go:172] (0xc000ac2420) (0xc0004741e0) Create stream\nI0127 14:25:17.247385    2627 log.go:172] (0xc000ac2420) (0xc0004741e0) Stream added, broadcasting: 5\nI0127 14:25:17.248889    2627 log.go:172] (0xc000ac2420) Reply frame received for 5\nI0127 14:25:17.363213    2627 log.go:172] (0xc000ac2420) Data frame received for 3\nI0127 14:25:17.363437    2627 log.go:172] (0xc00053e000) (3) Data frame handling\nI0127 14:25:17.363515    2627 log.go:172] (0xc00053e000) (3) Data frame sent\nI0127 14:25:17.513294    2627 log.go:172] (0xc000ac2420) Data frame received for 1\nI0127 14:25:17.513428    2627 log.go:172] (0xc000ac2420) (0xc00053e000) Stream removed, broadcasting: 3\nI0127 14:25:17.513482    2627 log.go:172] (0xc00053e8c0) (1) Data frame handling\nI0127 14:25:17.513501    2627 log.go:172] (0xc00053e8c0) (1) Data frame sent\nI0127 14:25:17.513517    2627 log.go:172] (0xc000ac2420) (0xc0004741e0) Stream removed, broadcasting: 5\nI0127 14:25:17.513530    2627 log.go:172] (0xc000ac2420) (0xc00053e8c0) Stream removed, broadcasting: 1\nI0127 14:25:17.513539    2627 log.go:172] (0xc000ac2420) Go away received\nI0127 14:25:17.514137    2627 log.go:172] (0xc000ac2420) (0xc00053e8c0) Stream removed, broadcasting: 1\nI0127 14:25:17.514176    2627 log.go:172] (0xc000ac2420) (0xc00053e000) Stream removed, broadcasting: 3\nI0127 14:25:17.514184    2627 log.go:172] (0xc000ac2420) (0xc0004741e0) Stream removed, broadcasting: 5\n"
Jan 27 14:25:17.522: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:25:17.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5673" for this suite.
Jan 27 14:25:23.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:25:23.724: INFO: namespace emptydir-5673 deletion completed in 6.175453244s

• [SLOW TEST:18.779 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
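The shared-emptyDir test above verifies that two containers mounting the same volume see each other's files: one container writes `shareddata.txt`, and `kubectl exec` reads it back from the other. A local simulation of that property, where the "volume" is a temp directory and the "containers" are plain functions:

```python
# Local simulation of a shared emptyDir: one writer, one reader, same volume.
# The file name and message mirror the test's log output; the directory
# stands in for the emptyDir mount.
import tempfile
import pathlib

def writer_container(volume: pathlib.Path) -> None:
    (volume / "shareddata.txt").write_text(
        "Hello from the busy-box sub-container\n")

def reader_container(volume: pathlib.Path) -> str:
    return (volume / "shareddata.txt").read_text()

with tempfile.TemporaryDirectory() as d:
    volume = pathlib.Path(d)
    writer_container(volume)
    out = reader_container(volume)

print(out, end="")  # Hello from the busy-box sub-container
```

In the real test the same guarantee comes from both containers mounting one emptyDir volume, whose lifetime is tied to the pod.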
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:25:23.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 27 14:25:49.144: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:25:50.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5137" for this suite.
Jan 27 14:26:14.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:26:14.441: INFO: namespace replicaset-5137 deletion completed in 24.149472845s

• [SLOW TEST:50.715 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:26:14.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 27 14:26:14.656: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 27 14:26:15.502: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 27 14:26:17.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:19.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:21.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:23.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:25.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:28.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:29.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:31.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715731975, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:26:34.812: INFO: Waited 968.256851ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:26:36.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9346" for this suite.
Jan 27 14:26:44.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:26:44.575: INFO: namespace aggregator-9346 deletion completed in 8.232133375s

• [SLOW TEST:30.133 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:26:44.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:26:44.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5198" for this suite.
Jan 27 14:27:07.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:27:07.117: INFO: namespace kubelet-test-5198 deletion completed in 22.151573139s

• [SLOW TEST:22.541 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:27:07.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f4c40803-0008-4e40-ab0f-fc0a4a58f449
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f4c40803-0008-4e40-ab0f-fc0a4a58f449
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:27:23.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1380" for this suite.
Jan 27 14:27:45.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:27:45.533: INFO: namespace configmap-1380 deletion completed in 22.128364986s

• [SLOW TEST:38.416 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:27:45.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0127 14:28:06.097979       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 14:28:06.098: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:28:06.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7580" for this suite.
Jan 27 14:28:31.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:28:32.039: INFO: namespace gc-7580 deletion completed in 25.934677129s

• [SLOW TEST:46.506 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:28:32.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:28:32.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998" in namespace "downward-api-1905" to be "success or failure"
Jan 27 14:28:32.376: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 125.864291ms
Jan 27 14:28:34.400: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149767326s
Jan 27 14:28:36.433: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183123668s
Jan 27 14:28:38.443: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193277146s
Jan 27 14:28:40.477: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22682924s
Jan 27 14:28:43.412: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 11.162024039s
Jan 27 14:28:45.426: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 13.176640659s
Jan 27 14:28:47.460: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 15.21010391s
Jan 27 14:28:49.475: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Pending", Reason="", readiness=false. Elapsed: 17.225421248s
Jan 27 14:28:51.486: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.236578302s
STEP: Saw pod success
Jan 27 14:28:51.487: INFO: Pod "downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998" satisfied condition "success or failure"
Jan 27 14:28:51.493: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998 container client-container: 
STEP: delete the pod
Jan 27 14:28:51.598: INFO: Waiting for pod downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998 to disappear
Jan 27 14:28:51.667: INFO: Pod downwardapi-volume-d1816532-8ea7-4e45-9c16-96291911f998 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:28:51.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1905" for this suite.
Jan 27 14:28:57.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:28:57.931: INFO: namespace downward-api-1905 deletion completed in 6.206736032s

• [SLOW TEST:25.891 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:28:57.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-564a9512-6e23-4032-baeb-b9cbdbcd5da7
STEP: Creating a pod to test consume configMaps
Jan 27 14:28:58.212: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983" in namespace "projected-3414" to be "success or failure"
Jan 27 14:28:58.220: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 7.744369ms
Jan 27 14:29:00.912: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 2.699706845s
Jan 27 14:29:02.919: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 4.706517334s
Jan 27 14:29:04.929: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 6.716654958s
Jan 27 14:29:06.936: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723453883s
Jan 27 14:29:08.945: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 10.73257953s
Jan 27 14:29:10.955: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 12.742413104s
Jan 27 14:29:12.977: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Pending", Reason="", readiness=false. Elapsed: 14.764003787s
Jan 27 14:29:14.984: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.771076252s
STEP: Saw pod success
Jan 27 14:29:14.984: INFO: Pod "pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983" satisfied condition "success or failure"
Jan 27 14:29:14.988: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 27 14:29:15.033: INFO: Waiting for pod pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983 to disappear
Jan 27 14:29:15.309: INFO: Pod pod-projected-configmaps-6e72db07-c277-4f18-b8ce-08027d9df983 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:29:15.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3414" for this suite.
Jan 27 14:29:21.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:29:21.525: INFO: namespace projected-3414 deletion completed in 6.198022853s

• [SLOW TEST:23.590 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:29:21.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:29:21.737: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 27 14:29:26.753: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 27 14:29:36.783: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 27 14:29:36.951: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2165,SelfLink:/apis/apps/v1/namespaces/deployment-2165/deployments/test-cleanup-deployment,UID:83c8bfb1-4c41-454e-b7ae-66010f1fa6bc,ResourceVersion:22073269,Generation:1,CreationTimestamp:2020-01-27 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 27 14:29:36.968: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2165,SelfLink:/apis/apps/v1/namespaces/deployment-2165/replicasets/test-cleanup-deployment-55bbcbc84c,UID:95106c79-840b-43a8-a51e-7e710112ac2d,ResourceVersion:22073271,Generation:1,CreationTimestamp:2020-01-27 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 83c8bfb1-4c41-454e-b7ae-66010f1fa6bc 0xc000f76037 0xc000f76038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 14:29:36.968: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 27 14:29:36.968: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2165,SelfLink:/apis/apps/v1/namespaces/deployment-2165/replicasets/test-cleanup-controller,UID:ebe31ec1-d8cc-4cb5-b844-812469011df1,ResourceVersion:22073270,Generation:1,CreationTimestamp:2020-01-27 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 83c8bfb1-4c41-454e-b7ae-66010f1fa6bc 0xc000485c87 0xc000485c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 27 14:29:37.140: INFO: Pod "test-cleanup-controller-qzkxg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qzkxg,GenerateName:test-cleanup-controller-,Namespace:deployment-2165,SelfLink:/api/v1/namespaces/deployment-2165/pods/test-cleanup-controller-qzkxg,UID:592f3682-1a90-4fdf-86be-b5b0b6b59e0f,ResourceVersion:22073266,Generation:0,CreationTimestamp:2020-01-27 14:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ebe31ec1-d8cc-4cb5-b844-812469011df1 0xc000f76977 0xc000f76978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vp24n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vp24n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vp24n true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f769f0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc000f76a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:29:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:29:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:29:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:29:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-27 14:29:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 14:29:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://68067c66b9cff3cd0c6da762aab77bc7e5e2da204bf8202b080c61954df6d74e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 14:29:37.140: INFO: Pod "test-cleanup-deployment-55bbcbc84c-v4xth" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-v4xth,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2165,SelfLink:/api/v1/namespaces/deployment-2165/pods/test-cleanup-deployment-55bbcbc84c-v4xth,UID:5c7ded19-b166-4d57-a39d-7f7b4320eb5d,ResourceVersion:22073277,Generation:0,CreationTimestamp:2020-01-27 14:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 95106c79-840b-43a8-a51e-7e710112ac2d 0xc000f76af7 0xc000f76af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vp24n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vp24n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-vp24n true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f76b70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f76b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:29:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:29:37.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2165" for this suite.
Jan 27 14:29:43.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:29:43.625: INFO: namespace deployment-2165 deletion completed in 6.451786733s

• [SLOW TEST:22.100 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:29:43.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 27 14:29:43.934: INFO: Waiting up to 5m0s for pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109" in namespace "containers-4418" to be "success or failure"
Jan 27 14:29:43.961: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 26.698719ms
Jan 27 14:29:45.974: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039311027s
Jan 27 14:29:47.982: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047352179s
Jan 27 14:29:49.989: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054065317s
Jan 27 14:29:51.997: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062444816s
Jan 27 14:29:54.016: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08094359s
Jan 27 14:29:56.053: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 12.118628281s
Jan 27 14:29:58.089: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 14.154173945s
Jan 27 14:30:00.103: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 16.168438524s
Jan 27 14:30:02.117: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 18.182521018s
Jan 27 14:30:05.454: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Pending", Reason="", readiness=false. Elapsed: 21.519212405s
Jan 27 14:30:07.467: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.532090422s
STEP: Saw pod success
Jan 27 14:30:07.467: INFO: Pod "client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109" satisfied condition "success or failure"
Jan 27 14:30:07.500: INFO: Trying to get logs from node iruya-node pod client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109 container test-container: 
STEP: delete the pod
Jan 27 14:30:07.699: INFO: Waiting for pod client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109 to disappear
Jan 27 14:30:07.704: INFO: Pod client-containers-12f0b953-5157-4437-bf62-1cdecbf5b109 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:30:07.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4418" for this suite.
Jan 27 14:30:13.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:30:13.948: INFO: namespace containers-4418 deletion completed in 6.238360253s

• [SLOW TEST:30.322 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:30:13.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-77612e91-ac98-4ca0-9b39-163b522062ba
STEP: Creating a pod to test consume configMaps
Jan 27 14:30:14.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256" in namespace "configmap-2002" to be "success or failure"
Jan 27 14:30:14.269: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 52.658537ms
Jan 27 14:30:16.281: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064344674s
Jan 27 14:30:18.290: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073164593s
Jan 27 14:30:20.297: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08071845s
Jan 27 14:30:22.327: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110360802s
Jan 27 14:30:24.334: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117703389s
Jan 27 14:30:26.343: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 12.126213425s
Jan 27 14:30:28.351: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Pending", Reason="", readiness=false. Elapsed: 14.134601184s
Jan 27 14:30:30.365: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.148848295s
STEP: Saw pod success
Jan 27 14:30:30.365: INFO: Pod "pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256" satisfied condition "success or failure"
Jan 27 14:30:30.370: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256 container configmap-volume-test: 
STEP: delete the pod
Jan 27 14:30:30.432: INFO: Waiting for pod pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256 to disappear
Jan 27 14:30:30.597: INFO: Pod pod-configmaps-ddc04353-4b3a-4369-b405-04c15ffbd256 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:30:30.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2002" for this suite.
Jan 27 14:30:38.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:30:38.958: INFO: namespace configmap-2002 deletion completed in 8.346333611s

• [SLOW TEST:25.010 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:30:38.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:30:39.178: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.026133ms)
Jan 27 14:30:39.188: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.687302ms)
Jan 27 14:30:39.196: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.406551ms)
Jan 27 14:30:39.203: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.282383ms)
Jan 27 14:30:39.211: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.938578ms)
Jan 27 14:30:39.217: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.542386ms)
Jan 27 14:30:39.224: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.549866ms)
Jan 27 14:30:39.232: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.672374ms)
Jan 27 14:30:39.239: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.633766ms)
Jan 27 14:30:39.244: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.853688ms)
Jan 27 14:30:39.250: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.313886ms)
Jan 27 14:30:39.326: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 76.791888ms)
Jan 27 14:30:39.335: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.357365ms)
Jan 27 14:30:39.344: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.899487ms)
Jan 27 14:30:39.351: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.258286ms)
Jan 27 14:30:39.359: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.805268ms)
Jan 27 14:30:39.371: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.007621ms)
Jan 27 14:30:39.386: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.082758ms)
Jan 27 14:30:39.396: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.311736ms)
Jan 27 14:30:39.404: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.68343ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:30:39.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-520" for this suite.
Jan 27 14:30:45.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:30:45.648: INFO: namespace proxy-520 deletion completed in 6.239169254s

• [SLOW TEST:6.689 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:30:45.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 27 14:31:01.948: INFO: Pod pod-hostip-34c2c555-bfb3-463e-a2b0-da34c84d5ce4 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:31:01.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7832" for this suite.
Jan 27 14:31:41.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:31:42.080: INFO: namespace pods-7832 deletion completed in 40.127122114s

• [SLOW TEST:56.430 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:31:42.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 27 14:31:42.509: INFO: Number of nodes with available pods: 0
Jan 27 14:31:42.509: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:44.247: INFO: Number of nodes with available pods: 0
Jan 27 14:31:44.247: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:44.751: INFO: Number of nodes with available pods: 0
Jan 27 14:31:44.751: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:46.185: INFO: Number of nodes with available pods: 0
Jan 27 14:31:46.185: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:46.531: INFO: Number of nodes with available pods: 0
Jan 27 14:31:46.531: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:47.642: INFO: Number of nodes with available pods: 0
Jan 27 14:31:47.642: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:48.531: INFO: Number of nodes with available pods: 0
Jan 27 14:31:48.532: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:52.479: INFO: Number of nodes with available pods: 0
Jan 27 14:31:52.479: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:53.210: INFO: Number of nodes with available pods: 0
Jan 27 14:31:53.210: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:53.527: INFO: Number of nodes with available pods: 0
Jan 27 14:31:53.527: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:55.439: INFO: Number of nodes with available pods: 0
Jan 27 14:31:55.439: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:55.522: INFO: Number of nodes with available pods: 0
Jan 27 14:31:55.522: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:56.557: INFO: Number of nodes with available pods: 0
Jan 27 14:31:56.557: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:57.523: INFO: Number of nodes with available pods: 1
Jan 27 14:31:57.523: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:58.527: INFO: Number of nodes with available pods: 1
Jan 27 14:31:58.527: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:31:59.523: INFO: Number of nodes with available pods: 1
Jan 27 14:31:59.523: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:32:00.532: INFO: Number of nodes with available pods: 2
Jan 27 14:32:00.532: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 27 14:32:00.612: INFO: Number of nodes with available pods: 2
Jan 27 14:32:00.612: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1651, will wait for the garbage collector to delete the pods
Jan 27 14:32:01.712: INFO: Deleting DaemonSet.extensions daemon-set took: 16.353082ms
Jan 27 14:32:02.012: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.376498ms
Jan 27 14:32:16.827: INFO: Number of nodes with available pods: 0
Jan 27 14:32:16.827: INFO: Number of running nodes: 0, number of available pods: 0
Jan 27 14:32:16.838: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1651/daemonsets","resourceVersion":"22073642"},"items":null}

Jan 27 14:32:16.844: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1651/pods","resourceVersion":"22073642"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:32:16.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1651" for this suite.
Jan 27 14:32:24.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:32:25.023: INFO: namespace daemonsets-1651 deletion completed in 8.146166276s

• [SLOW TEST:42.943 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:32:25.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:32:25.184: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6" in namespace "downward-api-4259" to be "success or failure"
Jan 27 14:32:25.240: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 55.866173ms
Jan 27 14:32:27.250: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064938708s
Jan 27 14:32:29.258: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073658601s
Jan 27 14:32:31.287: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102574924s
Jan 27 14:32:33.297: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112579518s
Jan 27 14:32:35.314: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12936397s
Jan 27 14:32:37.338: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.153142211s
Jan 27 14:32:40.284: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.099000332s
Jan 27 14:32:42.331: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.146048843s
STEP: Saw pod success
Jan 27 14:32:42.331: INFO: Pod "downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6" satisfied condition "success or failure"
Jan 27 14:32:42.335: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6 container client-container: 
STEP: delete the pod
Jan 27 14:32:42.526: INFO: Waiting for pod downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6 to disappear
Jan 27 14:32:42.537: INFO: Pod downwardapi-volume-8c3ed9bf-f93f-48d2-84b5-d7f959acb2f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:32:42.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4259" for this suite.
Jan 27 14:32:48.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:32:48.739: INFO: namespace downward-api-4259 deletion completed in 6.191817368s

• [SLOW TEST:23.716 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
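The repeated `Waiting up to 5m0s for pod … to be "success or failure"` lines above come from the framework polling the pod's phase until it reaches a terminal state. A minimal Python sketch of that poll loop (the function name, parameters, and injectable `sleep` are illustrative, not the framework's actual Go API):

```python
import time

def wait_for_pod_success(get_phase, timeout=300.0, poll=2.0, sleep=time.sleep):
    """Poll a pod's phase until it is terminal, mirroring the
    'success or failure' condition in the log above.

    get_phase() returns the current phase string ("Pending",
    "Succeeded", "Failed", ...); sleep is injectable for testing.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase == "Succeeded":   # the "success" half of the condition
            return True
        if phase == "Failed":      # the "failure" half
            return False
        sleep(poll)                # 2s between INFO lines in the log
    raise TimeoutError("pod did not reach a terminal phase")
```

Each `Elapsed:` line in the log corresponds to one iteration of such a loop.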
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:32:48.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-6b008e9b-403b-4633-88bc-28e438c93fe1
STEP: Creating secret with name s-test-opt-upd-e3d41bc9-b759-4a1b-91c3-7e173cfabc5f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6b008e9b-403b-4633-88bc-28e438c93fe1
STEP: Updating secret s-test-opt-upd-e3d41bc9-b759-4a1b-91c3-7e173cfabc5f
STEP: Creating secret with name s-test-opt-create-54b28ece-b607-45cd-ae8d-f37beffda6a6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:34:17.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9021" for this suite.
Jan 27 14:34:39.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:34:39.401: INFO: namespace projected-9021 deletion completed in 22.333855232s

• [SLOW TEST:110.662 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
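The "optional updates" test above mounts several secrets through a single projected volume, with each source marked `optional` so that deleting or creating a source secret is reflected in the volume instead of failing the pod. A sketch of the volume spec it builds (helper name and argument shapes are illustrative; field names follow the v1 API):

```python
def projected_secret_volume(name, secret_names, optional=True):
    """Build a projected-volume dict with several secret sources,
    each marked optional, as in the s-test-opt-* steps above."""
    return {
        "name": name,
        "projected": {
            "sources": [
                {"secret": {"name": s, "optional": optional}}
                for s in secret_names
            ]
        },
    }
```

With `optional: true`, the kubelet simply updates the mounted view when a source secret appears or disappears, which is what the "waiting to observe update in volume" step verifies.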
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:34:39.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:34:39.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9" in namespace "downward-api-8523" to be "success or failure"
Jan 27 14:34:39.682: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 62.317917ms
Jan 27 14:34:41.696: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075853829s
Jan 27 14:34:43.707: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086537387s
Jan 27 14:34:45.720: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100014926s
Jan 27 14:34:47.729: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108441706s
Jan 27 14:34:49.736: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11606621s
Jan 27 14:34:51.833: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.213004087s
Jan 27 14:34:53.853: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.232758822s
Jan 27 14:34:55.862: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.242278779s
STEP: Saw pod success
Jan 27 14:34:55.862: INFO: Pod "downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9" satisfied condition "success or failure"
Jan 27 14:34:55.868: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9 container client-container: 
STEP: delete the pod
Jan 27 14:34:55.959: INFO: Waiting for pod downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9 to disappear
Jan 27 14:34:55.970: INFO: Pod downwardapi-volume-8a38f23d-a99c-44af-a93f-f53598777dc9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:34:55.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8523" for this suite.
Jan 27 14:35:02.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:35:02.847: INFO: namespace downward-api-8523 deletion completed in 6.795243468s

• [SLOW TEST:23.445 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:35:02.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 27 14:35:02.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8038'
Jan 27 14:35:05.881: INFO: stderr: ""
Jan 27 14:35:05.882: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 27 14:35:05.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8038'
Jan 27 14:35:06.028: INFO: stderr: ""
Jan 27 14:35:06.028: INFO: stdout: "update-demo-nautilus-kv4tw "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jan 27 14:35:11.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8038'
Jan 27 14:35:11.190: INFO: stderr: ""
Jan 27 14:35:11.190: INFO: stdout: "update-demo-nautilus-kv4tw update-demo-nautilus-mgxrw "
Jan 27 14:35:11.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv4tw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8038'
Jan 27 14:35:11.295: INFO: stderr: ""
Jan 27 14:35:11.295: INFO: stdout: ""
Jan 27 14:35:11.295: INFO: update-demo-nautilus-kv4tw is created but not running
Jan 27 14:35:16.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8038'
Jan 27 14:35:17.766: INFO: stderr: ""
Jan 27 14:35:17.766: INFO: stdout: "update-demo-nautilus-kv4tw update-demo-nautilus-mgxrw "
Jan 27 14:35:17.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv4tw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8038'
Jan 27 14:35:18.333: INFO: stderr: ""
Jan 27 14:35:18.333: INFO: stdout: ""
Jan 27 14:35:18.333: INFO: update-demo-nautilus-kv4tw is created but not running
Jan 27 14:35:23.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8038'
Jan 27 14:35:23.513: INFO: stderr: ""
Jan 27 14:35:23.513: INFO: stdout: "update-demo-nautilus-kv4tw update-demo-nautilus-mgxrw "
Jan 27 14:35:23.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv4tw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8038'
Jan 27 14:35:23.729: INFO: stderr: ""
Jan 27 14:35:23.729: INFO: stdout: "true"
Jan 27 14:35:23.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kv4tw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8038'
Jan 27 14:35:23.823: INFO: stderr: ""
Jan 27 14:35:23.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 14:35:23.823: INFO: validating pod update-demo-nautilus-kv4tw
Jan 27 14:35:23.893: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 14:35:23.893: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 27 14:35:23.893: INFO: update-demo-nautilus-kv4tw is verified up and running
Jan 27 14:35:23.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mgxrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8038'
Jan 27 14:35:24.031: INFO: stderr: ""
Jan 27 14:35:24.031: INFO: stdout: "true"
Jan 27 14:35:24.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mgxrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8038'
Jan 27 14:35:24.145: INFO: stderr: ""
Jan 27 14:35:24.145: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 14:35:24.145: INFO: validating pod update-demo-nautilus-mgxrw
Jan 27 14:35:24.188: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 14:35:24.188: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 27 14:35:24.188: INFO: update-demo-nautilus-mgxrw is verified up and running
STEP: using delete to clean up resources
Jan 27 14:35:24.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8038'
Jan 27 14:35:24.279: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:35:24.279: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 27 14:35:24.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8038'
Jan 27 14:35:24.381: INFO: stderr: "No resources found.\n"
Jan 27 14:35:24.381: INFO: stdout: ""
Jan 27 14:35:24.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8038 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 14:35:24.465: INFO: stderr: ""
Jan 27 14:35:24.465: INFO: stdout: "update-demo-nautilus-kv4tw\nupdate-demo-nautilus-mgxrw\n"
Jan 27 14:35:24.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8038'
Jan 27 14:35:26.137: INFO: stderr: "No resources found.\n"
Jan 27 14:35:26.137: INFO: stdout: ""
Jan 27 14:35:26.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8038 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 14:35:26.370: INFO: stderr: ""
Jan 27 14:35:26.370: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:35:26.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8038" for this suite.
Jan 27 14:35:50.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:35:50.548: INFO: namespace kubectl-8038 deletion completed in 24.171889537s

• [SLOW TEST:47.701 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
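The Update Demo test above repeatedly evaluates a go-template against each pod: it prints `true` only when a container status named `update-demo` has a `running` state (the empty-stdout lines are the "created but not running" case). A Python analogue of that template's logic, for illustration only:

```python
def container_running(pod, name="update-demo"):
    """Analogue of the go-template used above: True iff a container
    status with the given name has a 'running' entry in its state."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        # (exists . "state" "running") checks the key; (eq .name ...) the name
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False
```

A pod with no `status.containerStatuses` yet, or whose container is still `waiting`, yields `False`, matching the empty stdout the test retries on.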
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:35:50.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 27 14:35:50.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 27 14:35:50.877: INFO: stderr: ""
Jan 27 14:35:50.878: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:35:50.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3743" for this suite.
Jan 27 14:35:56.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:35:57.080: INFO: namespace kubectl-3743 deletion completed in 6.187135772s

• [SLOW TEST:6.531 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
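The `cluster-info` stdout above is logged with its ANSI color escapes intact (`\x1b[0;32m` green, `\x1b[0;33m` yellow, `\x1b[0m` reset). When post-processing such logs, the escapes can be stripped with a small regex; a sketch:

```python
import re

# SGR color sequences of the form ESC [ <params> m, as seen in the log
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color escape sequences like \\x1b[0;32m."""
    return ANSI_RE.sub("", s)
```

Applied to the stdout above, this recovers the plain text `Kubernetes master is running at https://172.24.4.57:6443`, which is what the "validating cluster-info" step checks for.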
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:35:57.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9953
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9953 to expose endpoints map[]
Jan 27 14:35:57.495: INFO: successfully validated that service endpoint-test2 in namespace services-9953 exposes endpoints map[] (128.444049ms elapsed)
STEP: Creating pod pod1 in namespace services-9953
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9953 to expose endpoints map[pod1:[80]]
Jan 27 14:36:02.959: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.455055846s elapsed, will retry)
Jan 27 14:36:08.073: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (10.568908654s elapsed, will retry)
Jan 27 14:36:10.166: INFO: successfully validated that service endpoint-test2 in namespace services-9953 exposes endpoints map[pod1:[80]] (12.661383676s elapsed)
STEP: Creating pod pod2 in namespace services-9953
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9953 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 27 14:36:14.862: INFO: Unexpected endpoints: found map[f0830410-8882-46a0-9e21-5078123e6156:[80]], expected map[pod1:[80] pod2:[80]] (4.685374095s elapsed, will retry)
Jan 27 14:36:22.679: INFO: successfully validated that service endpoint-test2 in namespace services-9953 exposes endpoints map[pod1:[80] pod2:[80]] (12.502376608s elapsed)
STEP: Deleting pod pod1 in namespace services-9953
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9953 to expose endpoints map[pod2:[80]]
Jan 27 14:36:23.815: INFO: successfully validated that service endpoint-test2 in namespace services-9953 exposes endpoints map[pod2:[80]] (1.130843757s elapsed)
STEP: Deleting pod pod2 in namespace services-9953
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9953 to expose endpoints map[]
Jan 27 14:36:24.875: INFO: successfully validated that service endpoint-test2 in namespace services-9953 exposes endpoints map[] (1.043953922s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:36:26.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9953" for this suite.
Jan 27 14:36:32.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:36:33.080: INFO: namespace services-9953 deletion completed in 6.119424604s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:36.000 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
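The Services test above compares the service's Endpoints object against an expected `map[podName:[ports]]`; the "Unexpected endpoints" retries happen while the endpoints controller catches up. A sketch of how such a map can be derived from v1 Endpoints subsets (function name is illustrative; field names follow the v1 API):

```python
def endpoints_map(subsets):
    """Build the map[podName:[ports]] view the test compares against
    from the subsets of a v1 Endpoints object."""
    result = {}
    for subset in subsets:
        ports = sorted(p["port"] for p in subset.get("ports", []))
        for addr in subset.get("addresses", []):
            # targetRef names the backing pod; addresses without one
            # cannot be matched to a pod and are skipped
            name = addr.get("targetRef", {}).get("name")
            if name:
                result[name] = ports
    return result
```

The `found map[f0830410-…:[80]]` line above shows an intermediate state where an address resolved to a UID-like name before the expected pod names appeared.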
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:36:33.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0127 14:37:19.914493       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 14:37:19.914: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:37:19.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9413" for this suite.
Jan 27 14:37:46.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:37:46.583: INFO: namespace gc-9413 deletion completed in 26.649988671s

• [SLOW TEST:73.502 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:37:46.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4f318b48-c1f7-4ba0-90de-e3bdb8f708b7
STEP: Creating a pod to test consume secrets
Jan 27 14:37:46.915: INFO: Waiting up to 5m0s for pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928" in namespace "secrets-785" to be "success or failure"
Jan 27 14:37:46.932: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 16.104547ms
Jan 27 14:37:48.938: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022761402s
Jan 27 14:37:50.945: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029437587s
Jan 27 14:37:52.952: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036537421s
Jan 27 14:37:54.958: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042213162s
Jan 27 14:37:56.975: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059639535s
Jan 27 14:37:58.987: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 12.071403339s
Jan 27 14:38:01.028: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Pending", Reason="", readiness=false. Elapsed: 14.112139182s
Jan 27 14:38:03.054: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.13791518s
STEP: Saw pod success
Jan 27 14:38:03.054: INFO: Pod "pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928" satisfied condition "success or failure"
Jan 27 14:38:03.060: INFO: Trying to get logs from node iruya-node pod pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928 container secret-volume-test: 
STEP: delete the pod
Jan 27 14:38:03.119: INFO: Waiting for pod pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928 to disappear
Jan 27 14:38:03.133: INFO: Pod pod-secrets-c88ff9ee-7dea-4f4e-a6a3-23117257b928 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:38:03.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-785" for this suite.
Jan 27 14:38:09.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:38:09.431: INFO: namespace secrets-785 deletion completed in 6.277682383s

• [SLOW TEST:22.847 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
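The `defaultMode` the Secrets test sets is an octal file mode, but the API serializes it as a decimal integer in JSON, which trips people up when reading manifests or API dumps. A small illustration (the specific mode `0400` is an assumption for the example; this log does not show the value the test used):

```python
def default_mode_json(octal_str):
    """Convert an octal mode string like '0400' (owner read-only) to
    the decimal value JSON/kubectl would show for defaultMode."""
    # NOTE: 0400 here is an illustrative mode, not taken from this log
    return int(octal_str, 8)
```

So a manifest's `defaultMode: 0400` round-trips through the API as `256`, and `0777` as `511`.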
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:38:09.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 27 14:38:09.639: INFO: Waiting up to 5m0s for pod "pod-5dff36df-b030-4a3a-bae3-af5611445673" in namespace "emptydir-609" to be "success or failure"
Jan 27 14:38:09.661: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 22.048392ms
Jan 27 14:38:11.670: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031478552s
Jan 27 14:38:13.679: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040222067s
Jan 27 14:38:16.239: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600395237s
Jan 27 14:38:18.249: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 8.610282156s
Jan 27 14:38:20.259: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620212748s
Jan 27 14:38:22.276: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 12.637032858s
Jan 27 14:38:24.286: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Pending", Reason="", readiness=false. Elapsed: 14.647153385s
Jan 27 14:38:26.297: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.657944332s
STEP: Saw pod success
Jan 27 14:38:26.297: INFO: Pod "pod-5dff36df-b030-4a3a-bae3-af5611445673" satisfied condition "success or failure"
Jan 27 14:38:26.312: INFO: Trying to get logs from node iruya-node pod pod-5dff36df-b030-4a3a-bae3-af5611445673 container test-container: 
STEP: delete the pod
Jan 27 14:38:26.481: INFO: Waiting for pod pod-5dff36df-b030-4a3a-bae3-af5611445673 to disappear
Jan 27 14:38:26.516: INFO: Pod pod-5dff36df-b030-4a3a-bae3-af5611445673 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:38:26.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-609" for this suite.
Jan 27 14:38:32.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:38:32.907: INFO: namespace emptydir-609 deletion completed in 6.346857055s

• [SLOW TEST:23.475 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:38:32.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 27 14:38:47.697: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4564 pod-service-account-df14a945-c96b-4693-9769-7f0593585186 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 27 14:38:48.290: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4564 pod-service-account-df14a945-c96b-4693-9769-7f0593585186 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 27 14:38:49.000: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4564 pod-service-account-df14a945-c96b-4693-9769-7f0593585186 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:38:49.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4564" for this suite.
Jan 27 14:38:55.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:38:55.671: INFO: namespace svcaccounts-4564 deletion completed in 6.258881827s

• [SLOW TEST:22.763 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:38:55.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:40:18.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5019" for this suite.
Jan 27 14:40:24.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:40:24.595: INFO: namespace container-runtime-5019 deletion completed in 6.109792109s

• [SLOW TEST:88.923 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:40:24.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 27 14:40:24.969: INFO: Number of nodes with available pods: 0
Jan 27 14:40:24.969: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:26.546: INFO: Number of nodes with available pods: 0
Jan 27 14:40:26.546: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:27.356: INFO: Number of nodes with available pods: 0
Jan 27 14:40:27.356: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:28.679: INFO: Number of nodes with available pods: 0
Jan 27 14:40:28.679: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:29.199: INFO: Number of nodes with available pods: 0
Jan 27 14:40:29.199: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:29.985: INFO: Number of nodes with available pods: 0
Jan 27 14:40:29.985: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:30.998: INFO: Number of nodes with available pods: 0
Jan 27 14:40:30.998: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:31.993: INFO: Number of nodes with available pods: 0
Jan 27 14:40:31.993: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:35.247: INFO: Number of nodes with available pods: 0
Jan 27 14:40:35.247: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:36.416: INFO: Number of nodes with available pods: 0
Jan 27 14:40:36.416: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:36.985: INFO: Number of nodes with available pods: 0
Jan 27 14:40:36.985: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:38.354: INFO: Number of nodes with available pods: 0
Jan 27 14:40:38.354: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:39.129: INFO: Number of nodes with available pods: 0
Jan 27 14:40:39.129: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:39.999: INFO: Number of nodes with available pods: 0
Jan 27 14:40:40.000: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:40:40.982: INFO: Number of nodes with available pods: 2
Jan 27 14:40:40.982: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 27 14:40:41.009: INFO: Number of nodes with available pods: 1
Jan 27 14:40:41.009: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:42.033: INFO: Number of nodes with available pods: 1
Jan 27 14:40:42.033: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:43.046: INFO: Number of nodes with available pods: 1
Jan 27 14:40:43.047: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:44.795: INFO: Number of nodes with available pods: 1
Jan 27 14:40:44.795: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:45.187: INFO: Number of nodes with available pods: 1
Jan 27 14:40:45.187: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:46.024: INFO: Number of nodes with available pods: 1
Jan 27 14:40:46.024: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:47.100: INFO: Number of nodes with available pods: 1
Jan 27 14:40:47.100: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:48.027: INFO: Number of nodes with available pods: 1
Jan 27 14:40:48.027: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:49.024: INFO: Number of nodes with available pods: 1
Jan 27 14:40:49.024: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:50.074: INFO: Number of nodes with available pods: 1
Jan 27 14:40:50.074: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:51.021: INFO: Number of nodes with available pods: 1
Jan 27 14:40:51.021: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:52.047: INFO: Number of nodes with available pods: 1
Jan 27 14:40:52.047: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:53.030: INFO: Number of nodes with available pods: 1
Jan 27 14:40:53.030: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:54.081: INFO: Number of nodes with available pods: 1
Jan 27 14:40:54.081: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:55.068: INFO: Number of nodes with available pods: 1
Jan 27 14:40:55.068: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:56.052: INFO: Number of nodes with available pods: 1
Jan 27 14:40:56.052: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:57.020: INFO: Number of nodes with available pods: 1
Jan 27 14:40:57.020: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:40:58.036: INFO: Number of nodes with available pods: 1
Jan 27 14:40:58.036: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:01.951: INFO: Number of nodes with available pods: 1
Jan 27 14:41:01.951: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:02.023: INFO: Number of nodes with available pods: 1
Jan 27 14:41:02.023: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:03.042: INFO: Number of nodes with available pods: 1
Jan 27 14:41:03.042: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:04.025: INFO: Number of nodes with available pods: 1
Jan 27 14:41:04.025: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:05.021: INFO: Number of nodes with available pods: 1
Jan 27 14:41:05.021: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:07.442: INFO: Number of nodes with available pods: 1
Jan 27 14:41:07.442: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:08.056: INFO: Number of nodes with available pods: 1
Jan 27 14:41:08.056: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:09.588: INFO: Number of nodes with available pods: 1
Jan 27 14:41:09.588: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:10.131: INFO: Number of nodes with available pods: 1
Jan 27 14:41:10.131: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:11.026: INFO: Number of nodes with available pods: 1
Jan 27 14:41:11.026: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:41:12.030: INFO: Number of nodes with available pods: 2
Jan 27 14:41:12.030: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5884, will wait for the garbage collector to delete the pods
Jan 27 14:41:12.104: INFO: Deleting DaemonSet.extensions daemon-set took: 13.461317ms
Jan 27 14:41:12.504: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.299675ms
Jan 27 14:41:27.947: INFO: Number of nodes with available pods: 0
Jan 27 14:41:27.947: INFO: Number of running nodes: 0, number of available pods: 0
Jan 27 14:41:27.955: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5884/daemonsets","resourceVersion":"22074941"},"items":null}

Jan 27 14:41:27.959: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5884/pods","resourceVersion":"22074941"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:41:27.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5884" for this suite.
Jan 27 14:41:36.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:41:36.129: INFO: namespace daemonsets-5884 deletion completed in 8.151228344s

• [SLOW TEST:71.534 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:41:36.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1308/secret-test-848aff38-bdf3-452e-8a4f-2232bf63b6bf
STEP: Creating a pod to test consume secrets
Jan 27 14:41:36.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207" in namespace "secrets-1308" to be "success or failure"
Jan 27 14:41:36.379: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662953ms
Jan 27 14:41:38.388: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013032839s
Jan 27 14:41:40.396: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021061078s
Jan 27 14:41:42.404: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02893815s
Jan 27 14:41:44.415: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0402279s
Jan 27 14:41:46.426: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050925652s
Jan 27 14:41:48.435: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Pending", Reason="", readiness=false. Elapsed: 12.060279837s
Jan 27 14:41:50.881: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.506524243s
STEP: Saw pod success
Jan 27 14:41:50.881: INFO: Pod "pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207" satisfied condition "success or failure"
Jan 27 14:41:50.913: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207 container env-test: 
STEP: delete the pod
Jan 27 14:41:51.130: INFO: Waiting for pod pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207 to disappear
Jan 27 14:41:51.175: INFO: Pod pod-configmaps-ef7dc760-31e2-48c7-8d69-1b0358bc1207 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:41:51.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1308" for this suite.
Jan 27 14:41:57.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:41:57.438: INFO: namespace secrets-1308 deletion completed in 6.256267516s

• [SLOW TEST:21.308 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:41:57.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 27 14:41:57.611: INFO: Waiting up to 5m0s for pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e" in namespace "emptydir-4222" to be "success or failure"
Jan 27 14:41:57.622: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.371756ms
Jan 27 14:41:59.638: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026970708s
Jan 27 14:42:01.655: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043879148s
Jan 27 14:42:03.667: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055546115s
Jan 27 14:42:05.674: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06297319s
Jan 27 14:42:07.689: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.077316218s
Jan 27 14:42:09.696: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.084460348s
Jan 27 14:42:11.709: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.097622558s
Jan 27 14:42:13.718: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.106018666s
STEP: Saw pod success
Jan 27 14:42:13.718: INFO: Pod "pod-30692642-c5ae-4aa8-8e84-91884d34207e" satisfied condition "success or failure"
Jan 27 14:42:13.722: INFO: Trying to get logs from node iruya-node pod pod-30692642-c5ae-4aa8-8e84-91884d34207e container test-container: 
STEP: delete the pod
Jan 27 14:42:13.897: INFO: Waiting for pod pod-30692642-c5ae-4aa8-8e84-91884d34207e to disappear
Jan 27 14:42:13.918: INFO: Pod pod-30692642-c5ae-4aa8-8e84-91884d34207e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:42:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4222" for this suite.
Jan 27 14:42:19.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:42:20.102: INFO: namespace emptydir-4222 deletion completed in 6.169371942s

• [SLOW TEST:22.663 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:42:20.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:42:20.241: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031" in namespace "downward-api-2899" to be "success or failure"
Jan 27 14:42:20.256: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 14.842882ms
Jan 27 14:42:22.429: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18744587s
Jan 27 14:42:24.448: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206761553s
Jan 27 14:42:26.462: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220035126s
Jan 27 14:42:28.473: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231541118s
Jan 27 14:42:30.489: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 10.24767164s
Jan 27 14:42:32.502: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 12.25993267s
Jan 27 14:42:34.516: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 14.274556429s
Jan 27 14:42:36.535: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Pending", Reason="", readiness=false. Elapsed: 16.293844366s
Jan 27 14:42:38.556: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.314897548s
STEP: Saw pod success
Jan 27 14:42:38.557: INFO: Pod "downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031" satisfied condition "success or failure"
Jan 27 14:42:38.566: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031 container client-container: 
STEP: delete the pod
Jan 27 14:42:38.989: INFO: Waiting for pod downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031 to disappear
Jan 27 14:42:39.054: INFO: Pod downwardapi-volume-1e00ef87-eb08-4679-a7b8-44b6ad161031 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:42:39.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2899" for this suite.
Jan 27 14:42:45.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:42:45.180: INFO: namespace downward-api-2899 deletion completed in 6.08560106s

• [SLOW TEST:25.078 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:42:45.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 14:42:45.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8031'
Jan 27 14:42:45.521: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 14:42:45.521: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 27 14:42:45.551: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 27 14:42:45.573: INFO: scanned /root for discovery docs: 
Jan 27 14:42:45.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8031'
Jan 27 14:43:14.366: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 27 14:43:14.366: INFO: stdout: "Created e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05\nScaling up e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 27 14:43:14.367: INFO: stdout: "Created e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05\nScaling up e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 27 14:43:14.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8031'
Jan 27 14:43:15.354: INFO: stderr: ""
Jan 27 14:43:15.355: INFO: stdout: "e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05-vxt5t "
Jan 27 14:43:15.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05-vxt5t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8031'
Jan 27 14:43:15.518: INFO: stderr: ""
Jan 27 14:43:15.518: INFO: stdout: "true"
Jan 27 14:43:15.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05-vxt5t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8031'
Jan 27 14:43:15.631: INFO: stderr: ""
Jan 27 14:43:15.631: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 27 14:43:15.632: INFO: e2e-test-nginx-rc-76a6bae5e3042cba0b70cbe4cf1acb05-vxt5t is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 27 14:43:15.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8031'
Jan 27 14:43:15.742: INFO: stderr: ""
Jan 27 14:43:15.742: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:43:15.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8031" for this suite.
Jan 27 14:43:39.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:43:40.006: INFO: namespace kubectl-8031 deletion completed in 24.258910267s

• [SLOW TEST:54.825 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
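The test above drives the long-deprecated `kubectl rolling-update` command against a ReplicationController (the log's own stderr warns it is deprecated). A hedged sketch of the equivalent invocations — note that `rolling-update` was removed from kubectl entirely in v1.18, so on modern clusters the same rollout is done through a Deployment instead:

```shell
# Form exercised by the test (only works with older kubectl releases,
# and only against a ReplicationController):
kubectl rolling-update e2e-test-nginx-rc \
  --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine \
  --image-pull-policy=IfNotPresent \
  --namespace=kubectl-8031

# Modern equivalent: manage the pods with a Deployment (name is
# illustrative) and roll it by changing the image:
kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/e2e-test-nginx
```

The verification steps in the log (extracting `.metadata.name`, `.status.containerStatuses`, and `.spec.containers[].image` with `-o template`) are how the framework confirms the new pod is running the expected image.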
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:43:40.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-be9b07bc-0b65-45a0-9a38-41e248796e23
STEP: Creating a pod to test consume secrets
Jan 27 14:43:40.223: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6" in namespace "projected-1058" to be "success or failure"
Jan 27 14:43:40.326: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 103.606866ms
Jan 27 14:43:42.331: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108568399s
Jan 27 14:43:44.340: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11693151s
Jan 27 14:43:46.660: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437389114s
Jan 27 14:43:48.670: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447387464s
Jan 27 14:43:50.678: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.455106486s
Jan 27 14:43:52.718: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.495343042s
Jan 27 14:43:54.741: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.518513535s
STEP: Saw pod success
Jan 27 14:43:54.741: INFO: Pod "pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6" satisfied condition "success or failure"
Jan 27 14:43:54.750: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6 container projected-secret-volume-test: 
STEP: delete the pod
Jan 27 14:43:54.956: INFO: Waiting for pod pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6 to disappear
Jan 27 14:43:54.979: INFO: Pod pod-projected-secrets-3817785b-6481-4b3a-89aa-e2826e0a48d6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:43:54.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1058" for this suite.
Jan 27 14:44:01.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:44:01.202: INFO: namespace projected-1058 deletion completed in 6.216221569s

• [SLOW TEST:21.196 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
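The projected-secret test above boils down to mounting a Secret into a pod through a `projected` volume and reading it back. A minimal manifest sketch of what the framework creates — all names, the image, and the mount path here are illustrative stand-ins for the generated values in the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # the test uses a generated name
spec:
  restartPolicy: Never
  volumes:
    - name: projected-secret-volume
      projected:
        sources:
          - secret:
              name: projected-secret-test-example  # Secret created by the test beforehand
  containers:
    - name: projected-secret-volume-test
      image: busybox
      # Print the projected key so the test can assert on the pod logs
      command: ["cat", "/etc/projected-secret-volume/data-1"]
      volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
```

The "success or failure" polling in the log is the framework waiting for this one-shot pod to reach phase `Succeeded`, after which it inspects the container logs and deletes the pod.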
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:44:01.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan 27 14:44:01.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1353'
Jan 27 14:44:01.947: INFO: stderr: ""
Jan 27 14:44:01.947: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan 27 14:44:02.976: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:02.976: INFO: Found 0 / 1
Jan 27 14:44:03.959: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:03.960: INFO: Found 0 / 1
Jan 27 14:44:04.999: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:04.999: INFO: Found 0 / 1
Jan 27 14:44:05.957: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:05.957: INFO: Found 0 / 1
Jan 27 14:44:07.002: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:07.002: INFO: Found 0 / 1
Jan 27 14:44:07.954: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:07.954: INFO: Found 0 / 1
Jan 27 14:44:08.994: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:08.994: INFO: Found 0 / 1
Jan 27 14:44:09.958: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:09.958: INFO: Found 0 / 1
Jan 27 14:44:11.049: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:11.049: INFO: Found 0 / 1
Jan 27 14:44:11.954: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:11.954: INFO: Found 0 / 1
Jan 27 14:44:12.957: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:12.957: INFO: Found 0 / 1
Jan 27 14:44:13.969: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:13.970: INFO: Found 0 / 1
Jan 27 14:44:14.956: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:14.957: INFO: Found 0 / 1
Jan 27 14:44:15.959: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:15.959: INFO: Found 1 / 1
Jan 27 14:44:15.960: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 27 14:44:15.966: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 14:44:15.966: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 27 14:44:15.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9b44 redis-master --namespace=kubectl-1353'
Jan 27 14:44:16.200: INFO: stderr: ""
Jan 27 14:44:16.201: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Jan 14:44:14.404 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jan 14:44:14.405 # Server started, Redis version 3.2.12\n1:M 27 Jan 14:44:14.405 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jan 14:44:14.405 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 27 14:44:16.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --tail=1'
Jan 27 14:44:16.495: INFO: stderr: ""
Jan 27 14:44:16.495: INFO: stdout: "1:M 27 Jan 14:44:14.405 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 27 14:44:16.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --limit-bytes=1'
Jan 27 14:44:16.656: INFO: stderr: ""
Jan 27 14:44:16.656: INFO: stdout: " "
STEP: exposing timestamps
Jan 27 14:44:16.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --tail=1 --timestamps'
Jan 27 14:44:16.912: INFO: stderr: ""
Jan 27 14:44:16.912: INFO: stdout: "2020-01-27T14:44:14.406211141Z 1:M 27 Jan 14:44:14.405 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 27 14:44:19.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --since=1s'
Jan 27 14:44:19.611: INFO: stderr: ""
Jan 27 14:44:19.611: INFO: stdout: ""
Jan 27 14:44:19.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --since=24h'
Jan 27 14:44:19.753: INFO: stderr: ""
Jan 27 14:44:19.754: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Jan 14:44:14.404 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jan 14:44:14.405 # Server started, Redis version 3.2.12\n1:M 27 Jan 14:44:14.405 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jan 14:44:14.405 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 27 14:44:19.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1353'
Jan 27 14:44:19.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:44:19.874: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 27 14:44:19.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1353'
Jan 27 14:44:20.002: INFO: stderr: "No resources found.\n"
Jan 27 14:44:20.002: INFO: stdout: ""
Jan 27 14:44:20.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1353 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 14:44:20.150: INFO: stderr: ""
Jan 27 14:44:20.150: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:44:20.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1353" for this suite.
Jan 27 14:44:44.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:44:44.320: INFO: namespace kubectl-1353 deletion completed in 24.160405209s

• [SLOW TEST:43.118 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
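The log-filtering test above exercises four real `kubectl logs` flags in sequence. A condensed sketch of the same commands (pod and namespace names taken from the log; these flags exist in all recent kubectl releases):

```shell
# Full logs of the redis-master container in the pod
kubectl logs redis-master-n9b44 redis-master --namespace=kubectl-1353

# Last line only
kubectl logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --tail=1

# First byte only (the test expects a single character back)
kubectl logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --limit-bytes=1

# Prefix each line with an RFC3339 timestamp
kubectl logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --tail=1 --timestamps

# Restrict to a relative time window; --since=1s is expected to be empty
# here because Redis logged nothing in the preceding second
kubectl logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --since=1s
kubectl logs redis-master-n9b44 redis-master --namespace=kubectl-1353 --since=24h
```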
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:44:44.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1
Jan 27 14:44:44.574: INFO: Pod name my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1: Found 0 pods out of 1
Jan 27 14:44:49.703: INFO: Pod name my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1: Found 1 pods out of 1
Jan 27 14:44:49.703: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1" are running
Jan 27 14:44:59.739: INFO: Pod "my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1-rbdt9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:44:44 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:44:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:44:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 14:44:44 +0000 UTC Reason: Message:}])
Jan 27 14:44:59.740: INFO: Trying to dial the pod
Jan 27 14:45:04.811: INFO: Controller my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1: Got expected result from replica 1 [my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1-rbdt9]: "my-hostname-basic-1e5d352d-c7af-4d4c-a4e3-7ecf11a7c2e1-rbdt9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:45:04.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1339" for this suite.
Jan 27 14:45:10.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:45:10.981: INFO: namespace replication-controller-1339 deletion completed in 6.165177153s

• [SLOW TEST:26.660 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
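The ReplicationController test above creates a single-replica RC whose pods serve their own hostname, then dials each replica and checks the response matches the pod name. A manifest sketch under stated assumptions — the name is illustrative (the test generates a UUID), and the serve-hostname image and port are the ones the e2e suite conventionally uses:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-example   # test uses a generated UUID-based name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
        - name: my-hostname-basic-example
          # Public e2e test image that replies with its pod's hostname
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
          ports:
            - containerPort: 9376
```

Because each pod's hostname equals its pod name, dialing a replica and comparing the body against the pod name (as the log shows for `...-rbdt9`) verifies that every replica is serving.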
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:45:10.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6903/configmap-test-a59a90c2-72bf-4f0e-9967-5e3e188e0a67
STEP: Creating a pod to test consume configMaps
Jan 27 14:45:11.476: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664" in namespace "configmap-6903" to be "success or failure"
Jan 27 14:45:11.493: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 16.617525ms
Jan 27 14:45:13.501: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025456259s
Jan 27 14:45:15.510: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033666708s
Jan 27 14:45:17.519: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042962049s
Jan 27 14:45:19.528: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052087298s
Jan 27 14:45:21.535: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059239534s
Jan 27 14:45:23.543: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067225789s
Jan 27 14:45:25.551: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 14.07502872s
Jan 27 14:45:27.561: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Pending", Reason="", readiness=false. Elapsed: 16.084885976s
Jan 27 14:45:29.569: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.092890731s
STEP: Saw pod success
Jan 27 14:45:29.569: INFO: Pod "pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664" satisfied condition "success or failure"
Jan 27 14:45:29.573: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664 container env-test: 
STEP: delete the pod
Jan 27 14:45:29.649: INFO: Waiting for pod pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664 to disappear
Jan 27 14:45:29.663: INFO: Pod pod-configmaps-b5ff8469-0621-4c4b-b716-16972f329664 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:45:29.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6903" for this suite.
Jan 27 14:45:35.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:45:35.948: INFO: namespace configmap-6903 deletion completed in 6.280214381s

• [SLOW TEST:24.967 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
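The ConfigMap test above injects a ConfigMap key into a container's environment and asserts the variable appears in the pod output. A minimal sketch of the two objects involved — names, keys, and the image are illustrative, not the generated ones from the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example   # test uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
    - name: env-test
      image: busybox
      # Dump the environment so the test can grep for the injected value
      command: ["sh", "-c", "env"]
      env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test-example
              key: data-1
```

As with the other volume/env tests, the long "Pending → Succeeded" polling in the log is just the framework waiting for this one-shot pod to finish before reading its logs.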
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:45:35.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 27 14:45:36.111: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075557,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 14:45:36.112: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075557,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 27 14:45:46.138: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075571,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 27 14:45:46.139: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075571,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 27 14:45:56.153: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075585,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 14:45:56.154: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075585,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 27 14:46:06.165: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075598,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 14:46:06.165: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-a,UID:de2fa4c5-c2b2-4f43-a697-c016b4ede231,ResourceVersion:22075598,Generation:0,CreationTimestamp:2020-01-27 14:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 27 14:46:16.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-b,UID:28f8de5c-f63a-4692-992a-9155b241f7d1,ResourceVersion:22075612,Generation:0,CreationTimestamp:2020-01-27 14:46:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 14:46:16.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-b,UID:28f8de5c-f63a-4692-992a-9155b241f7d1,ResourceVersion:22075612,Generation:0,CreationTimestamp:2020-01-27 14:46:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 27 14:46:26.194: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-b,UID:28f8de5c-f63a-4692-992a-9155b241f7d1,ResourceVersion:22075627,Generation:0,CreationTimestamp:2020-01-27 14:46:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 14:46:26.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8491,SelfLink:/api/v1/namespaces/watch-8491/configmaps/e2e-watch-test-configmap-b,UID:28f8de5c-f63a-4692-992a-9155b241f7d1,ResourceVersion:22075627,Generation:0,CreationTimestamp:2020-01-27 14:46:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:46:36.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8491" for this suite.
Jan 27 14:46:42.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:46:42.407: INFO: namespace watch-8491 deletion completed in 6.202963695s

• [SLOW TEST:66.458 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:46:42.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6023
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 14:46:42.633: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 14:47:34.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6023 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:47:34.932: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:47:35.002499       8 log.go:172] (0xc000a306e0) (0xc00322cc80) Create stream
I0127 14:47:35.002638       8 log.go:172] (0xc000a306e0) (0xc00322cc80) Stream added, broadcasting: 1
I0127 14:47:35.010175       8 log.go:172] (0xc000a306e0) Reply frame received for 1
I0127 14:47:35.010202       8 log.go:172] (0xc000a306e0) (0xc0010470e0) Create stream
I0127 14:47:35.010209       8 log.go:172] (0xc000a306e0) (0xc0010470e0) Stream added, broadcasting: 3
I0127 14:47:35.012066       8 log.go:172] (0xc000a306e0) Reply frame received for 3
I0127 14:47:35.012095       8 log.go:172] (0xc000a306e0) (0xc0018f26e0) Create stream
I0127 14:47:35.012106       8 log.go:172] (0xc000a306e0) (0xc0018f26e0) Stream added, broadcasting: 5
I0127 14:47:35.013709       8 log.go:172] (0xc000a306e0) Reply frame received for 5
I0127 14:47:35.210070       8 log.go:172] (0xc000a306e0) Data frame received for 3
I0127 14:47:35.210272       8 log.go:172] (0xc0010470e0) (3) Data frame handling
I0127 14:47:35.210358       8 log.go:172] (0xc0010470e0) (3) Data frame sent
I0127 14:47:35.357153       8 log.go:172] (0xc000a306e0) Data frame received for 1
I0127 14:47:35.357304       8 log.go:172] (0xc000a306e0) (0xc0010470e0) Stream removed, broadcasting: 3
I0127 14:47:35.357398       8 log.go:172] (0xc00322cc80) (1) Data frame handling
I0127 14:47:35.357421       8 log.go:172] (0xc00322cc80) (1) Data frame sent
I0127 14:47:35.357487       8 log.go:172] (0xc000a306e0) (0xc0018f26e0) Stream removed, broadcasting: 5
I0127 14:47:35.357549       8 log.go:172] (0xc000a306e0) (0xc00322cc80) Stream removed, broadcasting: 1
I0127 14:47:35.357581       8 log.go:172] (0xc000a306e0) Go away received
I0127 14:47:35.358458       8 log.go:172] (0xc000a306e0) (0xc00322cc80) Stream removed, broadcasting: 1
I0127 14:47:35.358586       8 log.go:172] (0xc000a306e0) (0xc0010470e0) Stream removed, broadcasting: 3
I0127 14:47:35.358626       8 log.go:172] (0xc000a306e0) (0xc0018f26e0) Stream removed, broadcasting: 5
Jan 27 14:47:35.358: INFO: Waiting for endpoints: map[]
Jan 27 14:47:35.374: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6023 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:47:35.374: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:47:35.468659       8 log.go:172] (0xc00332abb0) (0xc0018f2d20) Create stream
I0127 14:47:35.468717       8 log.go:172] (0xc00332abb0) (0xc0018f2d20) Stream added, broadcasting: 1
I0127 14:47:35.476661       8 log.go:172] (0xc00332abb0) Reply frame received for 1
I0127 14:47:35.476698       8 log.go:172] (0xc00332abb0) (0xc001047180) Create stream
I0127 14:47:35.476720       8 log.go:172] (0xc00332abb0) (0xc001047180) Stream added, broadcasting: 3
I0127 14:47:35.478124       8 log.go:172] (0xc00332abb0) Reply frame received for 3
I0127 14:47:35.478157       8 log.go:172] (0xc00332abb0) (0xc00209c000) Create stream
I0127 14:47:35.478177       8 log.go:172] (0xc00332abb0) (0xc00209c000) Stream added, broadcasting: 5
I0127 14:47:35.479437       8 log.go:172] (0xc00332abb0) Reply frame received for 5
I0127 14:47:35.597455       8 log.go:172] (0xc00332abb0) Data frame received for 3
I0127 14:47:35.597517       8 log.go:172] (0xc001047180) (3) Data frame handling
I0127 14:47:35.597537       8 log.go:172] (0xc001047180) (3) Data frame sent
I0127 14:47:35.704644       8 log.go:172] (0xc00332abb0) Data frame received for 1
I0127 14:47:35.704774       8 log.go:172] (0xc00332abb0) (0xc001047180) Stream removed, broadcasting: 3
I0127 14:47:35.704827       8 log.go:172] (0xc0018f2d20) (1) Data frame handling
I0127 14:47:35.704880       8 log.go:172] (0xc0018f2d20) (1) Data frame sent
I0127 14:47:35.704917       8 log.go:172] (0xc00332abb0) (0xc00209c000) Stream removed, broadcasting: 5
I0127 14:47:35.704983       8 log.go:172] (0xc00332abb0) (0xc0018f2d20) Stream removed, broadcasting: 1
I0127 14:47:35.705014       8 log.go:172] (0xc00332abb0) Go away received
I0127 14:47:35.705477       8 log.go:172] (0xc00332abb0) (0xc0018f2d20) Stream removed, broadcasting: 1
I0127 14:47:35.705495       8 log.go:172] (0xc00332abb0) (0xc001047180) Stream removed, broadcasting: 3
I0127 14:47:35.705507       8 log.go:172] (0xc00332abb0) (0xc00209c000) Stream removed, broadcasting: 5
Jan 27 14:47:35.705: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:47:35.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6023" for this suite.
Jan 27 14:47:59.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:48:00.017: INFO: namespace pod-network-test-6023 deletion completed in 24.299893872s

• [SLOW TEST:77.610 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:48:00.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 27 14:48:16.780: INFO: Successfully updated pod "pod-update-543c7bca-9432-4798-bbc1-989ff2f86978"
STEP: verifying the updated pod is in kubernetes
Jan 27 14:48:18.528: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:48:18.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2960" for this suite.
Jan 27 14:48:40.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:48:41.191: INFO: namespace pods-2960 deletion completed in 22.648834762s

• [SLOW TEST:41.174 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:48:41.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 27 14:48:41.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6461'
Jan 27 14:48:43.981: INFO: stderr: ""
Jan 27 14:48:43.981: INFO: stdout: "pod/pause created\n"
Jan 27 14:48:43.981: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 27 14:48:43.982: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6461" to be "running and ready"
Jan 27 14:48:43.989: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.893129ms
Jan 27 14:48:45.996: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014334999s
Jan 27 14:48:48.002: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019821962s
Jan 27 14:48:50.085: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103091969s
Jan 27 14:48:52.093: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111370175s
Jan 27 14:48:54.101: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11935707s
Jan 27 14:48:56.111: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.129752041s
Jan 27 14:48:56.112: INFO: Pod "pause" satisfied condition "running and ready"
Jan 27 14:48:56.112: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 27 14:48:56.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6461'
Jan 27 14:48:56.274: INFO: stderr: ""
Jan 27 14:48:56.274: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 27 14:48:56.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6461'
Jan 27 14:48:56.415: INFO: stderr: ""
Jan 27 14:48:56.415: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 27 14:48:56.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6461'
Jan 27 14:48:56.543: INFO: stderr: ""
Jan 27 14:48:56.543: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 27 14:48:56.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6461'
Jan 27 14:48:56.651: INFO: stderr: ""
Jan 27 14:48:56.652: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 27 14:48:56.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6461'
Jan 27 14:48:56.881: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 14:48:56.881: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 27 14:48:56.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6461'
Jan 27 14:48:57.031: INFO: stderr: "No resources found.\n"
Jan 27 14:48:57.031: INFO: stdout: ""
Jan 27 14:48:57.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6461 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 14:48:57.121: INFO: stderr: ""
Jan 27 14:48:57.121: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:48:57.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6461" for this suite.
Jan 27 14:49:03.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:49:03.245: INFO: namespace kubectl-6461 deletion completed in 6.119579489s

• [SLOW TEST:22.054 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:49:03.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 27 14:49:17.452: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-fa050718-a047-4a1b-b238-60c11147f3c2,GenerateName:,Namespace:events-9714,SelfLink:/api/v1/namespaces/events-9714/pods/send-events-fa050718-a047-4a1b-b238-60c11147f3c2,UID:6cdde5b6-2714-4c61-8bf5-b2ad94fdcd37,ResourceVersion:22075982,Generation:0,CreationTimestamp:2020-01-27 14:49:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 398880049,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-79nj4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nj4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-79nj4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000890530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000890550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:49:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:49:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:49:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:49:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-27 14:49:03 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-27 14:49:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://f3ccb6e4f927d0da8a36af0c9cd7eb315a311aad486e1f7c9b1b5ce6dc9a822c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 27 14:49:19.462: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 27 14:49:21.470: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:49:21.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9714" for this suite.
Jan 27 14:49:59.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:49:59.679: INFO: namespace events-9714 deletion completed in 38.187466807s

• [SLOW TEST:56.433 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:49:59.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:49:59.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:50:14.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5280" for this suite.
Jan 27 14:51:00.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:51:00.647: INFO: namespace pods-5280 deletion completed in 46.268415547s

• [SLOW TEST:60.967 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:51:00.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9182.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9182.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9182.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 14:51:23.260: INFO: File wheezy_udp@dns-test-service-3.dns-9182.svc.cluster.local from pod  dns-9182/dns-test-4e69fc64-b674-4e24-9a11-f1f97a0d2c3f contains '' instead of 'foo.example.com.'
Jan 27 14:51:23.270: INFO: File jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local from pod  dns-9182/dns-test-4e69fc64-b674-4e24-9a11-f1f97a0d2c3f contains '' instead of 'foo.example.com.'
Jan 27 14:51:23.270: INFO: Lookups using dns-9182/dns-test-4e69fc64-b674-4e24-9a11-f1f97a0d2c3f failed for: [wheezy_udp@dns-test-service-3.dns-9182.svc.cluster.local jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local]

Jan 27 14:51:28.290: INFO: DNS probes using dns-test-4e69fc64-b674-4e24-9a11-f1f97a0d2c3f succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9182.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9182.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9182.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 14:51:54.683: INFO: File jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local from pod  dns-9182/dns-test-33b054d4-ad39-4017-aac8-f2b23ef83efd contains '' instead of 'bar.example.com.'
Jan 27 14:51:54.683: INFO: Lookups using dns-9182/dns-test-33b054d4-ad39-4017-aac8-f2b23ef83efd failed for: [jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local]

Jan 27 14:51:59.705: INFO: DNS probes using dns-test-33b054d4-ad39-4017-aac8-f2b23ef83efd succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9182.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9182.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9182.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 14:52:28.769: INFO: File wheezy_udp@dns-test-service-3.dns-9182.svc.cluster.local from pod  dns-9182/dns-test-01de2c07-2155-465e-9606-e8d42b232ca5 contains '' instead of '10.111.160.64'
Jan 27 14:52:28.773: INFO: File jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local from pod  dns-9182/dns-test-01de2c07-2155-465e-9606-e8d42b232ca5 contains '' instead of '10.111.160.64'
Jan 27 14:52:28.773: INFO: Lookups using dns-9182/dns-test-01de2c07-2155-465e-9606-e8d42b232ca5 failed for: [wheezy_udp@dns-test-service-3.dns-9182.svc.cluster.local jessie_udp@dns-test-service-3.dns-9182.svc.cluster.local]

Jan 27 14:52:33.817: INFO: DNS probes using dns-test-01de2c07-2155-465e-9606-e8d42b232ca5 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:52:34.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9182" for this suite.
Jan 27 14:52:44.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:52:44.298: INFO: namespace dns-9182 deletion completed in 10.13101185s

• [SLOW TEST:103.650 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:52:44.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 27 14:52:44.493: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1482" to be "success or failure"
Jan 27 14:52:44.511: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.228293ms
Jan 27 14:52:46.521: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028315397s
Jan 27 14:52:48.530: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036716512s
Jan 27 14:52:50.544: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051401395s
Jan 27 14:52:52.558: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064655044s
Jan 27 14:52:54.569: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075982422s
Jan 27 14:52:56.579: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.085880325s
Jan 27 14:52:58.586: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.092855912s
Jan 27 14:53:00.601: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108620052s
Jan 27 14:53:02.614: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.12078403s
Jan 27 14:53:04.625: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.131809682s
STEP: Saw pod success
Jan 27 14:53:04.625: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 27 14:53:04.629: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 27 14:53:04.671: INFO: Waiting for pod pod-host-path-test to disappear
Jan 27 14:53:04.763: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:53:04.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1482" for this suite.
Jan 27 14:53:10.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:53:10.931: INFO: namespace hostpath-1482 deletion completed in 6.159278205s

• [SLOW TEST:26.632 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:53:10.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:53:11.121: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d" in namespace "projected-6719" to be "success or failure"
Jan 27 14:53:11.305: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 183.763659ms
Jan 27 14:53:13.316: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194684887s
Jan 27 14:53:15.326: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204873545s
Jan 27 14:53:17.378: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256926803s
Jan 27 14:53:19.386: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264752618s
Jan 27 14:53:21.392: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.271092648s
Jan 27 14:53:23.420: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.298971911s
Jan 27 14:53:25.429: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.307924091s
Jan 27 14:53:27.436: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.314907267s
STEP: Saw pod success
Jan 27 14:53:27.436: INFO: Pod "downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d" satisfied condition "success or failure"
Jan 27 14:53:27.440: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d container client-container: 
STEP: delete the pod
Jan 27 14:53:27.646: INFO: Waiting for pod downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d to disappear
Jan 27 14:53:27.666: INFO: Pod downwardapi-volume-bcd3a9f9-e8f5-4930-9d9e-bdb05b1dee9d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:53:27.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6719" for this suite.
Jan 27 14:53:33.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:53:33.869: INFO: namespace projected-6719 deletion completed in 6.192985604s

• [SLOW TEST:22.937 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:53:33.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-1c766ca1-65af-4f08-80dc-9b2b76bd6463
STEP: Creating a pod to test consume secrets
Jan 27 14:53:34.108: INFO: Waiting up to 5m0s for pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744" in namespace "secrets-112" to be "success or failure"
Jan 27 14:53:34.166: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Pending", Reason="", readiness=false. Elapsed: 58.189813ms
Jan 27 14:53:36.180: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072456167s
Jan 27 14:53:38.190: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081879689s
Jan 27 14:53:40.199: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090780538s
Jan 27 14:53:42.231: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123279777s
Jan 27 14:53:44.239: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130977545s
Jan 27 14:53:46.249: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Pending", Reason="", readiness=false. Elapsed: 12.141057876s
Jan 27 14:53:48.258: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.149580461s
STEP: Saw pod success
Jan 27 14:53:48.258: INFO: Pod "pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744" satisfied condition "success or failure"
Jan 27 14:53:48.260: INFO: Trying to get logs from node iruya-node pod pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744 container secret-volume-test: 
STEP: delete the pod
Jan 27 14:53:48.323: INFO: Waiting for pod pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744 to disappear
Jan 27 14:53:48.387: INFO: Pod pod-secrets-5d68a711-ce37-4e6d-aca2-84548cd5b744 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:53:48.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-112" for this suite.
Jan 27 14:53:54.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:53:54.534: INFO: namespace secrets-112 deletion completed in 6.138069509s

• [SLOW TEST:20.663 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:53:54.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e6f30402-ecb5-4505-8784-0fcaef26f696 in namespace container-probe-4318
Jan 27 14:54:10.795: INFO: Started pod liveness-e6f30402-ecb5-4505-8784-0fcaef26f696 in namespace container-probe-4318
STEP: checking the pod's current state and verifying that restartCount is present
Jan 27 14:54:10.808: INFO: Initial restart count of pod liveness-e6f30402-ecb5-4505-8784-0fcaef26f696 is 0
Jan 27 14:54:31.739: INFO: Restart count of pod container-probe-4318/liveness-e6f30402-ecb5-4505-8784-0fcaef26f696 is now 1 (20.931136281s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:54:31.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4318" for this suite.
Jan 27 14:54:37.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:54:38.013: INFO: namespace container-probe-4318 deletion completed in 6.185538421s

• [SLOW TEST:43.478 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:54:38.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-4664397e-4ecf-479f-875d-34c210e3e1a8
STEP: Creating secret with name secret-projected-all-test-volume-fff22c1d-52a1-47e4-befc-464ccffcb1c3
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 27 14:54:38.267: INFO: Waiting up to 5m0s for pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d" in namespace "projected-2539" to be "success or failure"
Jan 27 14:54:38.404: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 136.55735ms
Jan 27 14:54:40.413: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146148926s
Jan 27 14:54:42.420: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153388016s
Jan 27 14:54:44.430: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162803593s
Jan 27 14:54:46.438: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170428389s
Jan 27 14:54:48.446: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179281467s
Jan 27 14:54:50.461: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.194149913s
Jan 27 14:54:52.497: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.229968543s
STEP: Saw pod success
Jan 27 14:54:52.497: INFO: Pod "projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d" satisfied condition "success or failure"
Jan 27 14:54:52.503: INFO: Trying to get logs from node iruya-node pod projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d container projected-all-volume-test: 
STEP: delete the pod
Jan 27 14:54:52.836: INFO: Waiting for pod projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d to disappear
Jan 27 14:54:52.845: INFO: Pod projected-volume-3c9c2a59-f058-483b-aa98-4caf1aec3f7d no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:54:52.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2539" for this suite.
Jan 27 14:54:58.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:54:59.126: INFO: namespace projected-2539 deletion completed in 6.241891898s

• [SLOW TEST:21.112 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:54:59.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 27 14:55:35.528: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:35.528: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:35.619559       8 log.go:172] (0xc000a306e0) (0xc002444f00) Create stream
I0127 14:55:35.619702       8 log.go:172] (0xc000a306e0) (0xc002444f00) Stream added, broadcasting: 1
I0127 14:55:35.629467       8 log.go:172] (0xc000a306e0) Reply frame received for 1
I0127 14:55:35.629519       8 log.go:172] (0xc000a306e0) (0xc001e9b220) Create stream
I0127 14:55:35.629534       8 log.go:172] (0xc000a306e0) (0xc001e9b220) Stream added, broadcasting: 3
I0127 14:55:35.631863       8 log.go:172] (0xc000a306e0) Reply frame received for 3
I0127 14:55:35.631912       8 log.go:172] (0xc000a306e0) (0xc00194da40) Create stream
I0127 14:55:35.631947       8 log.go:172] (0xc000a306e0) (0xc00194da40) Stream added, broadcasting: 5
I0127 14:55:35.633436       8 log.go:172] (0xc000a306e0) Reply frame received for 5
I0127 14:55:35.750711       8 log.go:172] (0xc000a306e0) Data frame received for 3
I0127 14:55:35.750801       8 log.go:172] (0xc001e9b220) (3) Data frame handling
I0127 14:55:35.750846       8 log.go:172] (0xc001e9b220) (3) Data frame sent
I0127 14:55:35.913451       8 log.go:172] (0xc000a306e0) Data frame received for 1
I0127 14:55:35.913577       8 log.go:172] (0xc002444f00) (1) Data frame handling
I0127 14:55:35.913609       8 log.go:172] (0xc002444f00) (1) Data frame sent
I0127 14:55:35.913629       8 log.go:172] (0xc000a306e0) (0xc00194da40) Stream removed, broadcasting: 5
I0127 14:55:35.913713       8 log.go:172] (0xc000a306e0) (0xc001e9b220) Stream removed, broadcasting: 3
I0127 14:55:35.913794       8 log.go:172] (0xc000a306e0) (0xc002444f00) Stream removed, broadcasting: 1
I0127 14:55:35.913923       8 log.go:172] (0xc000a306e0) Go away received
I0127 14:55:35.914155       8 log.go:172] (0xc000a306e0) (0xc002444f00) Stream removed, broadcasting: 1
I0127 14:55:35.914188       8 log.go:172] (0xc000a306e0) (0xc001e9b220) Stream removed, broadcasting: 3
I0127 14:55:35.914201       8 log.go:172] (0xc000a306e0) (0xc00194da40) Stream removed, broadcasting: 5
Jan 27 14:55:35.914: INFO: Exec stderr: ""
Jan 27 14:55:35.914: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:35.914: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:36.004260       8 log.go:172] (0xc000d58370) (0xc001e9b5e0) Create stream
I0127 14:55:36.004406       8 log.go:172] (0xc000d58370) (0xc001e9b5e0) Stream added, broadcasting: 1
I0127 14:55:36.014492       8 log.go:172] (0xc000d58370) Reply frame received for 1
I0127 14:55:36.014585       8 log.go:172] (0xc000d58370) (0xc0024aa0a0) Create stream
I0127 14:55:36.014595       8 log.go:172] (0xc000d58370) (0xc0024aa0a0) Stream added, broadcasting: 3
I0127 14:55:36.015953       8 log.go:172] (0xc000d58370) Reply frame received for 3
I0127 14:55:36.015983       8 log.go:172] (0xc000d58370) (0xc00194db80) Create stream
I0127 14:55:36.015992       8 log.go:172] (0xc000d58370) (0xc00194db80) Stream added, broadcasting: 5
I0127 14:55:36.018348       8 log.go:172] (0xc000d58370) Reply frame received for 5
I0127 14:55:36.163786       8 log.go:172] (0xc000d58370) Data frame received for 3
I0127 14:55:36.163940       8 log.go:172] (0xc0024aa0a0) (3) Data frame handling
I0127 14:55:36.163971       8 log.go:172] (0xc0024aa0a0) (3) Data frame sent
I0127 14:55:36.359823       8 log.go:172] (0xc000d58370) (0xc0024aa0a0) Stream removed, broadcasting: 3
I0127 14:55:36.360053       8 log.go:172] (0xc000d58370) Data frame received for 1
I0127 14:55:36.360082       8 log.go:172] (0xc001e9b5e0) (1) Data frame handling
I0127 14:55:36.360114       8 log.go:172] (0xc001e9b5e0) (1) Data frame sent
I0127 14:55:36.360205       8 log.go:172] (0xc000d58370) (0xc001e9b5e0) Stream removed, broadcasting: 1
I0127 14:55:36.360261       8 log.go:172] (0xc000d58370) (0xc00194db80) Stream removed, broadcasting: 5
I0127 14:55:36.360293       8 log.go:172] (0xc000d58370) Go away received
I0127 14:55:36.360518       8 log.go:172] (0xc000d58370) (0xc001e9b5e0) Stream removed, broadcasting: 1
I0127 14:55:36.360533       8 log.go:172] (0xc000d58370) (0xc0024aa0a0) Stream removed, broadcasting: 3
I0127 14:55:36.360568       8 log.go:172] (0xc000d58370) (0xc00194db80) Stream removed, broadcasting: 5
Jan 27 14:55:36.360: INFO: Exec stderr: ""
Jan 27 14:55:36.360: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:36.360: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:36.423503       8 log.go:172] (0xc000a31550) (0xc002445220) Create stream
I0127 14:55:36.423635       8 log.go:172] (0xc000a31550) (0xc002445220) Stream added, broadcasting: 1
I0127 14:55:36.432394       8 log.go:172] (0xc000a31550) Reply frame received for 1
I0127 14:55:36.432499       8 log.go:172] (0xc000a31550) (0xc001e9b680) Create stream
I0127 14:55:36.432516       8 log.go:172] (0xc000a31550) (0xc001e9b680) Stream added, broadcasting: 3
I0127 14:55:36.434133       8 log.go:172] (0xc000a31550) Reply frame received for 3
I0127 14:55:36.434163       8 log.go:172] (0xc000a31550) (0xc0024aa280) Create stream
I0127 14:55:36.434169       8 log.go:172] (0xc000a31550) (0xc0024aa280) Stream added, broadcasting: 5
I0127 14:55:36.436036       8 log.go:172] (0xc000a31550) Reply frame received for 5
I0127 14:55:36.688371       8 log.go:172] (0xc000a31550) Data frame received for 3
I0127 14:55:36.688687       8 log.go:172] (0xc001e9b680) (3) Data frame handling
I0127 14:55:36.688776       8 log.go:172] (0xc001e9b680) (3) Data frame sent
I0127 14:55:36.916221       8 log.go:172] (0xc000a31550) Data frame received for 1
I0127 14:55:36.916296       8 log.go:172] (0xc002445220) (1) Data frame handling
I0127 14:55:36.916318       8 log.go:172] (0xc002445220) (1) Data frame sent
I0127 14:55:36.916575       8 log.go:172] (0xc000a31550) (0xc002445220) Stream removed, broadcasting: 1
I0127 14:55:36.916633       8 log.go:172] (0xc000a31550) (0xc0024aa280) Stream removed, broadcasting: 5
I0127 14:55:36.916669       8 log.go:172] (0xc000a31550) (0xc001e9b680) Stream removed, broadcasting: 3
I0127 14:55:36.916683       8 log.go:172] (0xc000a31550) Go away received
I0127 14:55:36.916912       8 log.go:172] (0xc000a31550) (0xc002445220) Stream removed, broadcasting: 1
I0127 14:55:36.916929       8 log.go:172] (0xc000a31550) (0xc001e9b680) Stream removed, broadcasting: 3
I0127 14:55:36.916939       8 log.go:172] (0xc000a31550) (0xc0024aa280) Stream removed, broadcasting: 5
Jan 27 14:55:36.916: INFO: Exec stderr: ""
Jan 27 14:55:36.917: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:36.917: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:36.990227       8 log.go:172] (0xc0010e60b0) (0xc002445540) Create stream
I0127 14:55:36.990319       8 log.go:172] (0xc0010e60b0) (0xc002445540) Stream added, broadcasting: 1
I0127 14:55:37.001323       8 log.go:172] (0xc0010e60b0) Reply frame received for 1
I0127 14:55:37.001429       8 log.go:172] (0xc0010e60b0) (0xc0024aa3c0) Create stream
I0127 14:55:37.001441       8 log.go:172] (0xc0010e60b0) (0xc0024aa3c0) Stream added, broadcasting: 3
I0127 14:55:37.004100       8 log.go:172] (0xc0010e60b0) Reply frame received for 3
I0127 14:55:37.004289       8 log.go:172] (0xc0010e60b0) (0xc00194dc20) Create stream
I0127 14:55:37.004325       8 log.go:172] (0xc0010e60b0) (0xc00194dc20) Stream added, broadcasting: 5
I0127 14:55:37.008086       8 log.go:172] (0xc0010e60b0) Reply frame received for 5
I0127 14:55:37.153389       8 log.go:172] (0xc0010e60b0) Data frame received for 3
I0127 14:55:37.153459       8 log.go:172] (0xc0024aa3c0) (3) Data frame handling
I0127 14:55:37.153478       8 log.go:172] (0xc0024aa3c0) (3) Data frame sent
I0127 14:55:37.283690       8 log.go:172] (0xc0010e60b0) Data frame received for 1
I0127 14:55:37.283778       8 log.go:172] (0xc002445540) (1) Data frame handling
I0127 14:55:37.283798       8 log.go:172] (0xc002445540) (1) Data frame sent
I0127 14:55:37.284180       8 log.go:172] (0xc0010e60b0) (0xc002445540) Stream removed, broadcasting: 1
I0127 14:55:37.284286       8 log.go:172] (0xc0010e60b0) (0xc0024aa3c0) Stream removed, broadcasting: 3
I0127 14:55:37.284316       8 log.go:172] (0xc0010e60b0) (0xc00194dc20) Stream removed, broadcasting: 5
I0127 14:55:37.284353       8 log.go:172] (0xc0010e60b0) Go away received
I0127 14:55:37.284443       8 log.go:172] (0xc0010e60b0) (0xc002445540) Stream removed, broadcasting: 1
I0127 14:55:37.284457       8 log.go:172] (0xc0010e60b0) (0xc0024aa3c0) Stream removed, broadcasting: 3
I0127 14:55:37.284466       8 log.go:172] (0xc0010e60b0) (0xc00194dc20) Stream removed, broadcasting: 5
Jan 27 14:55:37.284: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 27 14:55:37.284: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:37.284: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:37.354654       8 log.go:172] (0xc0010e6bb0) (0xc002445860) Create stream
I0127 14:55:37.354736       8 log.go:172] (0xc0010e6bb0) (0xc002445860) Stream added, broadcasting: 1
I0127 14:55:37.366349       8 log.go:172] (0xc0010e6bb0) Reply frame received for 1
I0127 14:55:37.366429       8 log.go:172] (0xc0010e6bb0) (0xc002445900) Create stream
I0127 14:55:37.366440       8 log.go:172] (0xc0010e6bb0) (0xc002445900) Stream added, broadcasting: 3
I0127 14:55:37.370316       8 log.go:172] (0xc0010e6bb0) Reply frame received for 3
I0127 14:55:37.370413       8 log.go:172] (0xc0010e6bb0) (0xc001e9b720) Create stream
I0127 14:55:37.370422       8 log.go:172] (0xc0010e6bb0) (0xc001e9b720) Stream added, broadcasting: 5
I0127 14:55:37.372892       8 log.go:172] (0xc0010e6bb0) Reply frame received for 5
I0127 14:55:37.506290       8 log.go:172] (0xc0010e6bb0) Data frame received for 3
I0127 14:55:37.506321       8 log.go:172] (0xc002445900) (3) Data frame handling
I0127 14:55:37.506329       8 log.go:172] (0xc002445900) (3) Data frame sent
I0127 14:55:37.611505       8 log.go:172] (0xc0010e6bb0) Data frame received for 1
I0127 14:55:37.611561       8 log.go:172] (0xc0010e6bb0) (0xc002445900) Stream removed, broadcasting: 3
I0127 14:55:37.611597       8 log.go:172] (0xc002445860) (1) Data frame handling
I0127 14:55:37.611614       8 log.go:172] (0xc002445860) (1) Data frame sent
I0127 14:55:37.611633       8 log.go:172] (0xc0010e6bb0) (0xc002445860) Stream removed, broadcasting: 1
I0127 14:55:37.611884       8 log.go:172] (0xc0010e6bb0) (0xc001e9b720) Stream removed, broadcasting: 5
I0127 14:55:37.611914       8 log.go:172] (0xc0010e6bb0) (0xc002445860) Stream removed, broadcasting: 1
I0127 14:55:37.611924       8 log.go:172] (0xc0010e6bb0) (0xc002445900) Stream removed, broadcasting: 3
I0127 14:55:37.611933       8 log.go:172] (0xc0010e6bb0) (0xc001e9b720) Stream removed, broadcasting: 5
Jan 27 14:55:37.611: INFO: Exec stderr: ""
Jan 27 14:55:37.612: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:37.612: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:37.612071       8 log.go:172] (0xc0010e6bb0) Go away received
I0127 14:55:37.695228       8 log.go:172] (0xc0010e7340) (0xc002445a40) Create stream
I0127 14:55:37.695259       8 log.go:172] (0xc0010e7340) (0xc002445a40) Stream added, broadcasting: 1
I0127 14:55:37.701025       8 log.go:172] (0xc0010e7340) Reply frame received for 1
I0127 14:55:37.701051       8 log.go:172] (0xc0010e7340) (0xc002445ae0) Create stream
I0127 14:55:37.701060       8 log.go:172] (0xc0010e7340) (0xc002445ae0) Stream added, broadcasting: 3
I0127 14:55:37.702124       8 log.go:172] (0xc0010e7340) Reply frame received for 3
I0127 14:55:37.702140       8 log.go:172] (0xc0010e7340) (0xc002445b80) Create stream
I0127 14:55:37.702146       8 log.go:172] (0xc0010e7340) (0xc002445b80) Stream added, broadcasting: 5
I0127 14:55:37.703402       8 log.go:172] (0xc0010e7340) Reply frame received for 5
I0127 14:55:37.771852       8 log.go:172] (0xc0010e7340) Data frame received for 3
I0127 14:55:37.771914       8 log.go:172] (0xc002445ae0) (3) Data frame handling
I0127 14:55:37.771943       8 log.go:172] (0xc002445ae0) (3) Data frame sent
I0127 14:55:37.894766       8 log.go:172] (0xc0010e7340) (0xc002445ae0) Stream removed, broadcasting: 3
I0127 14:55:37.894904       8 log.go:172] (0xc0010e7340) Data frame received for 1
I0127 14:55:37.894918       8 log.go:172] (0xc002445a40) (1) Data frame handling
I0127 14:55:37.894932       8 log.go:172] (0xc002445a40) (1) Data frame sent
I0127 14:55:37.895000       8 log.go:172] (0xc0010e7340) (0xc002445a40) Stream removed, broadcasting: 1
I0127 14:55:37.895041       8 log.go:172] (0xc0010e7340) (0xc002445b80) Stream removed, broadcasting: 5
I0127 14:55:37.895070       8 log.go:172] (0xc0010e7340) Go away received
I0127 14:55:37.895253       8 log.go:172] (0xc0010e7340) (0xc002445a40) Stream removed, broadcasting: 1
I0127 14:55:37.895278       8 log.go:172] (0xc0010e7340) (0xc002445ae0) Stream removed, broadcasting: 3
I0127 14:55:37.895294       8 log.go:172] (0xc0010e7340) (0xc002445b80) Stream removed, broadcasting: 5
Jan 27 14:55:37.895: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 27 14:55:37.895: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:37.895: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:37.959205       8 log.go:172] (0xc001462370) (0xc002445ea0) Create stream
I0127 14:55:37.959324       8 log.go:172] (0xc001462370) (0xc002445ea0) Stream added, broadcasting: 1
I0127 14:55:37.966320       8 log.go:172] (0xc001462370) Reply frame received for 1
I0127 14:55:37.966382       8 log.go:172] (0xc001462370) (0xc00194dcc0) Create stream
I0127 14:55:37.966389       8 log.go:172] (0xc001462370) (0xc00194dcc0) Stream added, broadcasting: 3
I0127 14:55:37.967553       8 log.go:172] (0xc001462370) Reply frame received for 3
I0127 14:55:37.967590       8 log.go:172] (0xc001462370) (0xc001f92000) Create stream
I0127 14:55:37.967599       8 log.go:172] (0xc001462370) (0xc001f92000) Stream added, broadcasting: 5
I0127 14:55:37.968489       8 log.go:172] (0xc001462370) Reply frame received for 5
I0127 14:55:38.151670       8 log.go:172] (0xc001462370) Data frame received for 3
I0127 14:55:38.151768       8 log.go:172] (0xc00194dcc0) (3) Data frame handling
I0127 14:55:38.151815       8 log.go:172] (0xc00194dcc0) (3) Data frame sent
I0127 14:55:38.255946       8 log.go:172] (0xc001462370) Data frame received for 1
I0127 14:55:38.256099       8 log.go:172] (0xc001462370) (0xc001f92000) Stream removed, broadcasting: 5
I0127 14:55:38.256159       8 log.go:172] (0xc002445ea0) (1) Data frame handling
I0127 14:55:38.256173       8 log.go:172] (0xc002445ea0) (1) Data frame sent
I0127 14:55:38.256199       8 log.go:172] (0xc001462370) (0xc00194dcc0) Stream removed, broadcasting: 3
I0127 14:55:38.256237       8 log.go:172] (0xc001462370) (0xc002445ea0) Stream removed, broadcasting: 1
I0127 14:55:38.256245       8 log.go:172] (0xc001462370) Go away received
I0127 14:55:38.256907       8 log.go:172] (0xc001462370) (0xc002445ea0) Stream removed, broadcasting: 1
I0127 14:55:38.256979       8 log.go:172] (0xc001462370) (0xc00194dcc0) Stream removed, broadcasting: 3
I0127 14:55:38.257004       8 log.go:172] (0xc001462370) (0xc001f92000) Stream removed, broadcasting: 5
Jan 27 14:55:38.257: INFO: Exec stderr: ""
Jan 27 14:55:38.257: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:38.257: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:38.311348       8 log.go:172] (0xc001590630) (0xc001f926e0) Create stream
I0127 14:55:38.311385       8 log.go:172] (0xc001590630) (0xc001f926e0) Stream added, broadcasting: 1
I0127 14:55:38.315728       8 log.go:172] (0xc001590630) Reply frame received for 1
I0127 14:55:38.315770       8 log.go:172] (0xc001590630) (0xc0024aa460) Create stream
I0127 14:55:38.315777       8 log.go:172] (0xc001590630) (0xc0024aa460) Stream added, broadcasting: 3
I0127 14:55:38.321194       8 log.go:172] (0xc001590630) Reply frame received for 3
I0127 14:55:38.321233       8 log.go:172] (0xc001590630) (0xc002445f40) Create stream
I0127 14:55:38.321251       8 log.go:172] (0xc001590630) (0xc002445f40) Stream added, broadcasting: 5
I0127 14:55:38.322623       8 log.go:172] (0xc001590630) Reply frame received for 5
I0127 14:55:38.404703       8 log.go:172] (0xc001590630) Data frame received for 3
I0127 14:55:38.404931       8 log.go:172] (0xc0024aa460) (3) Data frame handling
I0127 14:55:38.404985       8 log.go:172] (0xc0024aa460) (3) Data frame sent
I0127 14:55:38.568063       8 log.go:172] (0xc001590630) (0xc0024aa460) Stream removed, broadcasting: 3
I0127 14:55:38.568263       8 log.go:172] (0xc001590630) Data frame received for 1
I0127 14:55:38.568273       8 log.go:172] (0xc001f926e0) (1) Data frame handling
I0127 14:55:38.568286       8 log.go:172] (0xc001f926e0) (1) Data frame sent
I0127 14:55:38.568335       8 log.go:172] (0xc001590630) (0xc001f926e0) Stream removed, broadcasting: 1
I0127 14:55:38.568540       8 log.go:172] (0xc001590630) (0xc002445f40) Stream removed, broadcasting: 5
I0127 14:55:38.568651       8 log.go:172] (0xc001590630) Go away received
I0127 14:55:38.568956       8 log.go:172] (0xc001590630) (0xc001f926e0) Stream removed, broadcasting: 1
I0127 14:55:38.569015       8 log.go:172] (0xc001590630) (0xc0024aa460) Stream removed, broadcasting: 3
I0127 14:55:38.569028       8 log.go:172] (0xc001590630) (0xc002445f40) Stream removed, broadcasting: 5
Jan 27 14:55:38.569: INFO: Exec stderr: ""
Jan 27 14:55:38.569: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:38.569: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:38.619371       8 log.go:172] (0xc001463a20) (0xc003246280) Create stream
I0127 14:55:38.619426       8 log.go:172] (0xc001463a20) (0xc003246280) Stream added, broadcasting: 1
I0127 14:55:38.623230       8 log.go:172] (0xc001463a20) Reply frame received for 1
I0127 14:55:38.623265       8 log.go:172] (0xc001463a20) (0xc003246320) Create stream
I0127 14:55:38.623278       8 log.go:172] (0xc001463a20) (0xc003246320) Stream added, broadcasting: 3
I0127 14:55:38.625233       8 log.go:172] (0xc001463a20) Reply frame received for 3
I0127 14:55:38.625254       8 log.go:172] (0xc001463a20) (0xc0024aa500) Create stream
I0127 14:55:38.625261       8 log.go:172] (0xc001463a20) (0xc0024aa500) Stream added, broadcasting: 5
I0127 14:55:38.626342       8 log.go:172] (0xc001463a20) Reply frame received for 5
I0127 14:55:38.775480       8 log.go:172] (0xc001463a20) Data frame received for 3
I0127 14:55:38.775602       8 log.go:172] (0xc003246320) (3) Data frame handling
I0127 14:55:38.775624       8 log.go:172] (0xc003246320) (3) Data frame sent
I0127 14:55:38.875186       8 log.go:172] (0xc001463a20) (0xc003246320) Stream removed, broadcasting: 3
I0127 14:55:38.875269       8 log.go:172] (0xc001463a20) Data frame received for 1
I0127 14:55:38.875288       8 log.go:172] (0xc003246280) (1) Data frame handling
I0127 14:55:38.875323       8 log.go:172] (0xc003246280) (1) Data frame sent
I0127 14:55:38.875393       8 log.go:172] (0xc001463a20) (0xc0024aa500) Stream removed, broadcasting: 5
I0127 14:55:38.875425       8 log.go:172] (0xc001463a20) (0xc003246280) Stream removed, broadcasting: 1
I0127 14:55:38.875455       8 log.go:172] (0xc001463a20) Go away received
I0127 14:55:38.875625       8 log.go:172] (0xc001463a20) (0xc003246280) Stream removed, broadcasting: 1
I0127 14:55:38.875662       8 log.go:172] (0xc001463a20) (0xc003246320) Stream removed, broadcasting: 3
I0127 14:55:38.875677       8 log.go:172] (0xc001463a20) (0xc0024aa500) Stream removed, broadcasting: 5
Jan 27 14:55:38.875: INFO: Exec stderr: ""
Jan 27 14:55:38.875: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8911 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 14:55:38.875: INFO: >>> kubeConfig: /root/.kube/config
I0127 14:55:38.933306       8 log.go:172] (0xc001590fd0) (0xc001f92be0) Create stream
I0127 14:55:38.933384       8 log.go:172] (0xc001590fd0) (0xc001f92be0) Stream added, broadcasting: 1
I0127 14:55:38.939699       8 log.go:172] (0xc001590fd0) Reply frame received for 1
I0127 14:55:38.939746       8 log.go:172] (0xc001590fd0) (0xc001f92c80) Create stream
I0127 14:55:38.939756       8 log.go:172] (0xc001590fd0) (0xc001f92c80) Stream added, broadcasting: 3
I0127 14:55:38.941354       8 log.go:172] (0xc001590fd0) Reply frame received for 3
I0127 14:55:38.941378       8 log.go:172] (0xc001590fd0) (0xc0032463c0) Create stream
I0127 14:55:38.941386       8 log.go:172] (0xc001590fd0) (0xc0032463c0) Stream added, broadcasting: 5
I0127 14:55:38.942275       8 log.go:172] (0xc001590fd0) Reply frame received for 5
I0127 14:55:39.032786       8 log.go:172] (0xc001590fd0) Data frame received for 3
I0127 14:55:39.032871       8 log.go:172] (0xc001f92c80) (3) Data frame handling
I0127 14:55:39.032894       8 log.go:172] (0xc001f92c80) (3) Data frame sent
I0127 14:55:39.139710       8 log.go:172] (0xc001590fd0) Data frame received for 1
I0127 14:55:39.139825       8 log.go:172] (0xc001f92be0) (1) Data frame handling
I0127 14:55:39.139867       8 log.go:172] (0xc001f92be0) (1) Data frame sent
I0127 14:55:39.139878       8 log.go:172] (0xc001590fd0) (0xc001f92be0) Stream removed, broadcasting: 1
I0127 14:55:39.141017       8 log.go:172] (0xc001590fd0) (0xc0032463c0) Stream removed, broadcasting: 5
I0127 14:55:39.141108       8 log.go:172] (0xc001590fd0) (0xc001f92c80) Stream removed, broadcasting: 3
I0127 14:55:39.141141       8 log.go:172] (0xc001590fd0) Go away received
I0127 14:55:39.141170       8 log.go:172] (0xc001590fd0) (0xc001f92be0) Stream removed, broadcasting: 1
I0127 14:55:39.141183       8 log.go:172] (0xc001590fd0) (0xc001f92c80) Stream removed, broadcasting: 3
I0127 14:55:39.141193       8 log.go:172] (0xc001590fd0) (0xc0032463c0) Stream removed, broadcasting: 5
Jan 27 14:55:39.141: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:55:39.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8911" for this suite.
Jan 27 14:56:41.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:56:41.434: INFO: namespace e2e-kubelet-etc-hosts-8911 deletion completed in 1m2.283777146s

• [SLOW TEST:102.308 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
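The KubeletManagedEtcHosts test above execs `cat /etc/hosts` and `cat /etc/hosts-original` inside each container to confirm that the kubelet rewrites /etc/hosts only for pods that do not use the host network. A minimal sketch of the kind of hostNetwork pod the test relies on (the pod name is taken from the log; the image and commands are illustrative assumptions, not the suite's exact manifest):

```yaml
# Hypothetical sketch: with hostNetwork: true the kubelet does NOT manage
# /etc/hosts, so each container sees the node's own copy of the file.
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod   # name as it appears in the log above
spec:
  hostNetwork: true             # the property under test
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "3600"]
```

For a pod without `hostNetwork: true`, the kubelet mounts a managed /etc/hosts containing a "# Kubernetes-managed hosts file." header, which is what the earlier verification steps check for.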
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:56:41.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:56:41.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e" in namespace "projected-5802" to be "success or failure"
Jan 27 14:56:41.726: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Pending", Reason="", readiness=false. Elapsed: 116.283998ms
Jan 27 14:56:43.735: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124902938s
Jan 27 14:56:45.741: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13141934s
Jan 27 14:56:47.753: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143434965s
Jan 27 14:56:49.768: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158341988s
Jan 27 14:56:51.778: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168219036s
Jan 27 14:56:53.792: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182495136s
Jan 27 14:56:55.800: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.189822206s
STEP: Saw pod success
Jan 27 14:56:55.800: INFO: Pod "downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e" satisfied condition "success or failure"
Jan 27 14:56:55.803: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e container client-container: 
STEP: delete the pod
Jan 27 14:56:55.964: INFO: Waiting for pod downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e to disappear
Jan 27 14:56:55.982: INFO: Pod downwardapi-volume-58affe83-b447-4a28-9de7-4531137e642e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:56:55.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5802" for this suite.
Jan 27 14:57:02.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:57:02.137: INFO: namespace projected-5802 deletion completed in 6.147845578s

• [SLOW TEST:20.700 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
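The Projected downwardAPI test above creates a pod whose container sets no memory limit and then reads the downward-API volume file, expecting the node's allocatable memory as the default. A sketch of such a pod, assuming illustrative names (the actual test pod is generated by the framework):

```yaml
# Hypothetical sketch: limits.memory is requested via resourceFieldRef but the
# container declares no memory limit, so the file defaults to node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container       # container name matching the log above
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # note: no resources.limits.memory set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The test then waits for the pod to reach "success or failure" (the Succeeded phase seen at 14:56:55 above) and inspects the container log for the expected value.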
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:57:02.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:57:02.525: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 27 14:57:07.532: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 27 14:57:15.586: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 27 14:57:17.596: INFO: Creating deployment "test-rollover-deployment"
Jan 27 14:57:17.734: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 27 14:57:19.755: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 27 14:57:19.766: INFO: Ensure that both replica sets have 1 created replica
Jan 27 14:57:19.775: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 27 14:57:19.800: INFO: Updating deployment test-rollover-deployment
Jan 27 14:57:19.800: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 27 14:57:21.988: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 27 14:57:21.996: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 27 14:57:22.001: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:22.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:24.016: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:24.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:26.018: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:26.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:28.014: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:28.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:30.014: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:30.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:32.021: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:32.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:34.018: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:34.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:36.030: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:36.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733840, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:38.013: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:38.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:40.014: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:40.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:42.021: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:42.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:44.015: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:44.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:46.050: INFO: all replica sets need to contain the pod-template-hash label
Jan 27 14:57:46.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733838, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715733837, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 14:57:48.105: INFO: 
Jan 27 14:57:48.105: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 27 14:57:48.121: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3604,SelfLink:/apis/apps/v1/namespaces/deployment-3604/deployments/test-rollover-deployment,UID:43469f0a-1a28-42ac-b360-5bac0cc30b56,ResourceVersion:22077101,Generation:2,CreationTimestamp:2020-01-27 14:57:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-27 14:57:18 +0000 UTC 2020-01-27 14:57:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-27 14:57:46 +0000 UTC 2020-01-27 14:57:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 27 14:57:48.125: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3604,SelfLink:/apis/apps/v1/namespaces/deployment-3604/replicasets/test-rollover-deployment-854595fc44,UID:817ba60e-2a20-4f8c-a79a-6c073b2e15d2,ResourceVersion:22077090,Generation:2,CreationTimestamp:2020-01-27 14:57:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 43469f0a-1a28-42ac-b360-5bac0cc30b56 0xc002ffc427 0xc002ffc428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 27 14:57:48.125: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 27 14:57:48.126: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3604,SelfLink:/apis/apps/v1/namespaces/deployment-3604/replicasets/test-rollover-controller,UID:4c277394-fc66-4ebf-8282-4b25bbc54f90,ResourceVersion:22077099,Generation:2,CreationTimestamp:2020-01-27 14:57:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 43469f0a-1a28-42ac-b360-5bac0cc30b56 0xc002ffc357 0xc002ffc358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 14:57:48.126: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3604,SelfLink:/apis/apps/v1/namespaces/deployment-3604/replicasets/test-rollover-deployment-9b8b997cf,UID:2065f0ac-f1fc-47dc-b23c-845ad31f3453,ResourceVersion:22077043,Generation:2,CreationTimestamp:2020-01-27 14:57:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 43469f0a-1a28-42ac-b360-5bac0cc30b56 0xc002ffc4f0 0xc002ffc4f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 14:57:48.131: INFO: Pod "test-rollover-deployment-854595fc44-sgl4z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-sgl4z,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3604,SelfLink:/api/v1/namespaces/deployment-3604/pods/test-rollover-deployment-854595fc44-sgl4z,UID:7b688e19-9c01-45fb-894d-ec62e4363052,ResourceVersion:22077074,Generation:0,CreationTimestamp:2020-01-27 14:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 817ba60e-2a20-4f8c-a79a-6c073b2e15d2 0xc00005b707 0xc00005b708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4ncx9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4ncx9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4ncx9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00005b810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00005b830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:57:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:57:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:57:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 14:57:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-27 14:57:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-27 14:57:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://db50e1676d9d43638f9e33859e3c43e715702d03a69ced53b670b5de3a36fb92}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:57:48.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3604" for this suite.
Jan 27 14:57:56.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:57:56.356: INFO: namespace deployment-3604 deletion completed in 8.219224373s

• [SLOW TEST:54.218 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:57:56.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 14:57:56.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3903'
Jan 27 14:57:56.661: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 14:57:56.661: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 27 14:58:00.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3903'
Jan 27 14:58:00.964: INFO: stderr: ""
Jan 27 14:58:00.964: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:58:00.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3903" for this suite.
Jan 27 14:58:21.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:58:21.185: INFO: namespace kubectl-3903 deletion completed in 20.175789756s

• [SLOW TEST:24.829 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
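The stderr captured above notes that `kubectl run --generator=deployment/apps.v1` is deprecated and recommends `kubectl create` instead. A minimal sketch of that suggested replacement, using the image and namespace from this test run; the commands are printed rather than executed, since no cluster is assumed here:

```shell
# Hedged sketch: modern equivalents of the deprecated generator-based `kubectl run`.
# The deployment name, image, and namespace below are taken from the log above.
set -eu
NS=kubectl-3903
CREATE_CMD="kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=${NS}"
DELETE_CMD="kubectl delete deployment e2e-test-nginx-deployment --namespace=${NS}"
# Print the commands instead of running them (no cluster available in this sketch):
echo "${CREATE_CMD}"
echo "${DELETE_CMD}"
```

`kubectl create deployment` produces the same `apps/v1` Deployment the test verifies, without the deprecation warning seen in the log.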
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:58:21.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-9e24e14f-8b18-4ef9-9018-b900d0368590
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:58:21.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2506" for this suite.
Jan 27 14:58:27.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:58:27.663: INFO: namespace configmap-2506 deletion completed in 6.266884275s

• [SLOW TEST:6.477 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:58:27.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 27 14:58:27.892: INFO: Waiting up to 5m0s for pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612" in namespace "containers-1114" to be "success or failure"
Jan 27 14:58:27.898: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512686ms
Jan 27 14:58:29.911: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019166816s
Jan 27 14:58:31.920: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028090956s
Jan 27 14:58:33.936: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044407613s
Jan 27 14:58:35.949: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057039028s
Jan 27 14:58:37.959: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067223267s
Jan 27 14:58:39.969: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077364982s
Jan 27 14:58:41.984: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Pending", Reason="", readiness=false. Elapsed: 14.092407645s
Jan 27 14:58:44.457: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.565535545s
STEP: Saw pod success
Jan 27 14:58:44.458: INFO: Pod "client-containers-96bb843f-657a-4b6c-9f29-00d306b80612" satisfied condition "success or failure"
Jan 27 14:58:44.473: INFO: Trying to get logs from node iruya-node pod client-containers-96bb843f-657a-4b6c-9f29-00d306b80612 container test-container: 
STEP: delete the pod
Jan 27 14:58:45.379: INFO: Waiting for pod client-containers-96bb843f-657a-4b6c-9f29-00d306b80612 to disappear
Jan 27 14:58:45.392: INFO: Pod client-containers-96bb843f-657a-4b6c-9f29-00d306b80612 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:58:45.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1114" for this suite.
Jan 27 14:58:51.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:58:51.524: INFO: namespace containers-1114 deletion completed in 6.124329386s

• [SLOW TEST:23.861 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
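The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a simple phase-polling loop in the e2e framework. A simplified sketch of that pattern (an assumption, not the framework's actual Go code), with a simulated phase sequence standing in for live cluster responses:

```shell
# Hedged sketch of a pod phase poller. A real loop would fetch each phase with:
#   kubectl get pod "$POD" -o jsonpath='{.status.phase}'
# Here the sequence is simulated so the sketch is self-contained.
set -eu
result=""
for phase in Pending Pending Running Succeeded; do
    echo "Pod phase=${phase}"
    case "${phase}" in
        Succeeded|Failed) result=${phase}; break ;;
    esac
    # A real poller would `sleep 2` here and enforce the 5m0s timeout.
done
echo "satisfied condition: ${result}"
```

Note that `Succeeded` and `Failed` both terminate the wait, matching the log's "success or failure" condition.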
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:58:51.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 14:58:51.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580" in namespace "projected-2405" to be "success or failure"
Jan 27 14:58:51.735: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580": Phase="Pending", Reason="", readiness=false. Elapsed: 10.404217ms
Jan 27 14:58:53.744: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019903036s
Jan 27 14:58:55.759: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035103007s
Jan 27 14:58:57.769: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044701021s
Jan 27 14:58:59.781: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05702414s
Jan 27 14:59:01.793: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580": Phase="Running", Reason="", readiness=true. Elapsed: 10.069361743s
Jan 27 14:59:03.812: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.087400642s
STEP: Saw pod success
Jan 27 14:59:03.812: INFO: Pod "downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580" satisfied condition "success or failure"
Jan 27 14:59:03.827: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580 container client-container: 
STEP: delete the pod
Jan 27 14:59:04.011: INFO: Waiting for pod downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580 to disappear
Jan 27 14:59:04.022: INFO: Pod downwardapi-volume-74edabc7-a4e7-4825-b9f8-4a1d01c04580 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:59:04.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2405" for this suite.
Jan 27 14:59:10.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:59:10.179: INFO: namespace projected-2405 deletion completed in 6.148997295s

• [SLOW TEST:18.654 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:59:10.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 14:59:10.320: INFO: Create a RollingUpdate DaemonSet
Jan 27 14:59:10.329: INFO: Check that daemon pods launch on every node of the cluster
Jan 27 14:59:10.350: INFO: Number of nodes with available pods: 0
Jan 27 14:59:10.350: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:12.916: INFO: Number of nodes with available pods: 0
Jan 27 14:59:12.916: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:14.685: INFO: Number of nodes with available pods: 0
Jan 27 14:59:14.685: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:15.375: INFO: Number of nodes with available pods: 0
Jan 27 14:59:15.375: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:16.479: INFO: Number of nodes with available pods: 0
Jan 27 14:59:16.479: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:18.174: INFO: Number of nodes with available pods: 0
Jan 27 14:59:18.174: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:18.999: INFO: Number of nodes with available pods: 0
Jan 27 14:59:18.999: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:19.492: INFO: Number of nodes with available pods: 0
Jan 27 14:59:19.492: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:20.424: INFO: Number of nodes with available pods: 0
Jan 27 14:59:20.424: INFO: Node iruya-node is running more than one daemon pod
Jan 27 14:59:21.363: INFO: Number of nodes with available pods: 1
Jan 27 14:59:21.363: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 27 14:59:22.410: INFO: Number of nodes with available pods: 2
Jan 27 14:59:22.410: INFO: Number of running nodes: 2, number of available pods: 2
Jan 27 14:59:22.410: INFO: Update the DaemonSet to trigger a rollout
Jan 27 14:59:22.421: INFO: Updating DaemonSet daemon-set
Jan 27 14:59:38.452: INFO: Roll back the DaemonSet before rollout is complete
Jan 27 14:59:38.468: INFO: Updating DaemonSet daemon-set
Jan 27 14:59:38.468: INFO: Make sure DaemonSet rollback is complete
Jan 27 14:59:38.473: INFO: Wrong image for pod: daemon-set-mlqz4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 27 14:59:38.473: INFO: Pod daemon-set-mlqz4 is not available
Jan 27 14:59:40.102: INFO: Wrong image for pod: daemon-set-mlqz4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 27 14:59:40.102: INFO: Pod daemon-set-mlqz4 is not available
Jan 27 14:59:40.880: INFO: Wrong image for pod: daemon-set-mlqz4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 27 14:59:40.880: INFO: Pod daemon-set-mlqz4 is not available
Jan 27 14:59:41.926: INFO: Wrong image for pod: daemon-set-mlqz4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 27 14:59:41.926: INFO: Pod daemon-set-mlqz4 is not available
Jan 27 14:59:42.975: INFO: Wrong image for pod: daemon-set-mlqz4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 27 14:59:42.975: INFO: Pod daemon-set-mlqz4 is not available
Jan 27 14:59:44.583: INFO: Pod daemon-set-blhjn is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6476, will wait for the garbage collector to delete the pods
Jan 27 14:59:44.671: INFO: Deleting DaemonSet.extensions daemon-set took: 9.553988ms
Jan 27 14:59:45.071: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.449654ms
Jan 27 14:59:51.038: INFO: Number of nodes with available pods: 0
Jan 27 14:59:51.038: INFO: Number of running nodes: 0, number of available pods: 0
Jan 27 14:59:51.047: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6476/daemonsets","resourceVersion":"22077451"},"items":null}

Jan 27 14:59:51.051: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6476/pods","resourceVersion":"22077451"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 14:59:51.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6476" for this suite.
Jan 27 14:59:57.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 14:59:57.201: INFO: namespace daemonsets-6476 deletion completed in 6.135252203s

• [SLOW TEST:47.022 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
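The rollback step above ("Roll back the DaemonSet before rollout is complete") is performed programmatically by the test. A manual equivalent would use `kubectl rollout undo`; this is a hedged sketch with the names from the log, printed rather than executed since no cluster is assumed:

```shell
# Hedged sketch: manual DaemonSet rollback matching the test's mid-rollout undo.
set -eu
DS=daemon-set
NS=daemonsets-6476
UNDO_CMD="kubectl rollout undo daemonset/${DS} --namespace=${NS}"
STATUS_CMD="kubectl rollout status daemonset/${DS} --namespace=${NS}"
# Print the commands instead of running them (no cluster available in this sketch):
echo "${UNDO_CMD}"
echo "${STATUS_CMD}"
```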
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 14:59:57.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-01590614-7137-48ad-a90a-66f07f78a0fe
STEP: Creating a pod to test consume secrets
Jan 27 14:59:57.279: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0" in namespace "projected-7910" to be "success or failure"
Jan 27 14:59:57.287: INFO: Pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.250123ms
Jan 27 14:59:59.302: INFO: Pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021977141s
Jan 27 15:00:01.311: INFO: Pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031384559s
Jan 27 15:00:03.316: INFO: Pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036860243s
Jan 27 15:00:05.327: INFO: Pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047297187s
Jan 27 15:00:07.343: INFO: Pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063617122s
STEP: Saw pod success
Jan 27 15:00:07.343: INFO: Pod "pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0" satisfied condition "success or failure"
Jan 27 15:00:07.351: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0 container projected-secret-volume-test: 
STEP: delete the pod
Jan 27 15:00:07.519: INFO: Waiting for pod pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0 to disappear
Jan 27 15:00:07.525: INFO: Pod pod-projected-secrets-546a78a0-cff7-4e22-b6dc-201fb0b259e0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:00:07.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7910" for this suite.
Jan 27 15:00:13.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:00:13.702: INFO: namespace projected-7910 deletion completed in 6.168826642s

• [SLOW TEST:16.501 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:00:13.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 15:00:13.783: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 27 15:00:17.006: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:00:17.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7418" for this suite.
Jan 27 15:00:25.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:00:26.020: INFO: namespace replication-controller-7418 deletion completed in 8.44785499s

• [SLOW TEST:12.317 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:00:26.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 27 15:00:27.866: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6203,SelfLink:/api/v1/namespaces/watch-6203/configmaps/e2e-watch-test-resource-version,UID:13ce078c-93a9-4b49-9816-3152a2ec43eb,ResourceVersion:22077611,Generation:0,CreationTimestamp:2020-01-27 15:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 15:00:27.866: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6203,SelfLink:/api/v1/namespaces/watch-6203/configmaps/e2e-watch-test-resource-version,UID:13ce078c-93a9-4b49-9816-3152a2ec43eb,ResourceVersion:22077612,Generation:0,CreationTimestamp:2020-01-27 15:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:00:27.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6203" for this suite.
Jan 27 15:00:34.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:00:34.303: INFO: namespace watch-6203 deletion completed in 6.42137507s

• [SLOW TEST:8.283 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:00:34.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 15:00:34.428: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:00:35.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6659" for this suite.
Jan 27 15:00:41.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:00:41.680: INFO: namespace custom-resource-definition-6659 deletion completed in 6.156357126s

• [SLOW TEST:7.375 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:00:41.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 27 15:00:41.857: INFO: Waiting up to 5m0s for pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b" in namespace "containers-7981" to be "success or failure"
Jan 27 15:00:41.886: INFO: Pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.378398ms
Jan 27 15:00:43.901: INFO: Pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043385672s
Jan 27 15:00:45.911: INFO: Pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053640933s
Jan 27 15:00:47.919: INFO: Pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062030974s
Jan 27 15:00:49.929: INFO: Pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071912895s
Jan 27 15:00:51.941: INFO: Pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083894661s
STEP: Saw pod success
Jan 27 15:00:51.941: INFO: Pod "client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b" satisfied condition "success or failure"
Jan 27 15:00:51.954: INFO: Trying to get logs from node iruya-node pod client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b container test-container: 
STEP: delete the pod
Jan 27 15:00:52.020: INFO: Waiting for pod client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b to disappear
Jan 27 15:00:52.029: INFO: Pod client-containers-386c22a5-f67d-4fc9-beac-9f3a26d6cd2b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:00:52.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7981" for this suite.
Jan 27 15:00:58.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:00:58.442: INFO: namespace containers-7981 deletion completed in 6.313765224s

• [SLOW TEST:16.762 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:00:58.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 15:00:58.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54" in namespace "downward-api-6293" to be "success or failure"
Jan 27 15:00:58.613: INFO: Pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54": Phase="Pending", Reason="", readiness=false. Elapsed: 8.587123ms
Jan 27 15:01:00.620: INFO: Pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015948224s
Jan 27 15:01:02.635: INFO: Pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030205741s
Jan 27 15:01:04.653: INFO: Pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049001905s
Jan 27 15:01:07.075: INFO: Pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470415194s
Jan 27 15:01:09.099: INFO: Pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.4945412s
STEP: Saw pod success
Jan 27 15:01:09.099: INFO: Pod "downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54" satisfied condition "success or failure"
Jan 27 15:01:09.108: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54 container client-container: 
STEP: delete the pod
Jan 27 15:01:09.220: INFO: Waiting for pod downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54 to disappear
Jan 27 15:01:09.225: INFO: Pod downwardapi-volume-3b87bd9e-9f2c-47b6-a5b2-82dca4583d54 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:01:09.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6293" for this suite.
Jan 27 15:01:15.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:01:15.419: INFO: namespace downward-api-6293 deletion completed in 6.182842988s

• [SLOW TEST:16.976 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:01:15.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 27 15:01:15.483: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix739759910/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:01:15.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6374" for this suite.
Jan 27 15:01:21.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:01:21.813: INFO: namespace kubectl-6374 deletion completed in 6.212723559s

• [SLOW TEST:6.394 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:01:21.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 15:01:21.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 27 15:01:22.129: INFO: stderr: ""
Jan 27 15:01:22.129: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:01:22.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-197" for this suite.
Jan 27 15:01:28.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:01:28.359: INFO: namespace kubectl-197 deletion completed in 6.219935908s

• [SLOW TEST:6.546 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:01:28.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 15:01:28.521: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 27 15:01:28.545: INFO: Number of nodes with available pods: 0
Jan 27 15:01:28.545: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:30.298: INFO: Number of nodes with available pods: 0
Jan 27 15:01:30.298: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:30.873: INFO: Number of nodes with available pods: 0
Jan 27 15:01:30.873: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:31.595: INFO: Number of nodes with available pods: 0
Jan 27 15:01:31.596: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:32.574: INFO: Number of nodes with available pods: 0
Jan 27 15:01:32.574: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:33.569: INFO: Number of nodes with available pods: 0
Jan 27 15:01:33.569: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:34.938: INFO: Number of nodes with available pods: 0
Jan 27 15:01:34.938: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:35.630: INFO: Number of nodes with available pods: 0
Jan 27 15:01:35.630: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:36.920: INFO: Number of nodes with available pods: 0
Jan 27 15:01:36.920: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:37.583: INFO: Number of nodes with available pods: 0
Jan 27 15:01:37.583: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:38.568: INFO: Number of nodes with available pods: 0
Jan 27 15:01:38.568: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:01:39.563: INFO: Number of nodes with available pods: 2
Jan 27 15:01:39.563: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 27 15:01:39.743: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:39.743: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:40.786: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:40.786: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:41.781: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:41.781: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:43.089: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:43.089: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:43.799: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:43.799: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:44.785: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:44.785: INFO: Pod daemon-set-2p884 is not available
Jan 27 15:01:44.785: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:45.790: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:45.790: INFO: Pod daemon-set-2p884 is not available
Jan 27 15:01:45.790: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:46.780: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:46.780: INFO: Pod daemon-set-2p884 is not available
Jan 27 15:01:46.780: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:47.783: INFO: Wrong image for pod: daemon-set-2p884. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:47.783: INFO: Pod daemon-set-2p884 is not available
Jan 27 15:01:47.783: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:48.779: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:48.779: INFO: Pod daemon-set-qp82r is not available
Jan 27 15:01:49.785: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:49.785: INFO: Pod daemon-set-qp82r is not available
Jan 27 15:01:50.788: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:50.788: INFO: Pod daemon-set-qp82r is not available
Jan 27 15:01:52.088: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:52.088: INFO: Pod daemon-set-qp82r is not available
Jan 27 15:01:52.856: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:52.856: INFO: Pod daemon-set-qp82r is not available
Jan 27 15:01:53.790: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:53.790: INFO: Pod daemon-set-qp82r is not available
Jan 27 15:01:54.785: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:55.799: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:56.778: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:57.784: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:58.779: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:59.781: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:01:59.781: INFO: Pod daemon-set-bq9tq is not available
Jan 27 15:02:00.781: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:02:00.781: INFO: Pod daemon-set-bq9tq is not available
Jan 27 15:02:01.783: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:02:01.783: INFO: Pod daemon-set-bq9tq is not available
Jan 27 15:02:02.786: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:02:02.786: INFO: Pod daemon-set-bq9tq is not available
Jan 27 15:02:03.793: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:02:03.793: INFO: Pod daemon-set-bq9tq is not available
Jan 27 15:02:04.780: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:02:04.780: INFO: Pod daemon-set-bq9tq is not available
Jan 27 15:02:05.780: INFO: Wrong image for pod: daemon-set-bq9tq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 27 15:02:05.780: INFO: Pod daemon-set-bq9tq is not available
Jan 27 15:02:06.787: INFO: Pod daemon-set-4vbnl is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 27 15:02:06.822: INFO: Number of nodes with available pods: 1
Jan 27 15:02:06.822: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:02:07.842: INFO: Number of nodes with available pods: 1
Jan 27 15:02:07.842: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:02:08.852: INFO: Number of nodes with available pods: 1
Jan 27 15:02:08.852: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:02:09.837: INFO: Number of nodes with available pods: 1
Jan 27 15:02:09.837: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:02:10.860: INFO: Number of nodes with available pods: 1
Jan 27 15:02:10.860: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:02:11.874: INFO: Number of nodes with available pods: 1
Jan 27 15:02:11.874: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:02:12.856: INFO: Number of nodes with available pods: 1
Jan 27 15:02:12.856: INFO: Node iruya-node is running more than one daemon pod
Jan 27 15:02:13.846: INFO: Number of nodes with available pods: 2
Jan 27 15:02:13.846: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3054, will wait for the garbage collector to delete the pods
Jan 27 15:02:14.001: INFO: Deleting DaemonSet.extensions daemon-set took: 22.457269ms
Jan 27 15:02:14.302: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.34209ms
Jan 27 15:02:27.911: INFO: Number of nodes with available pods: 0
Jan 27 15:02:27.911: INFO: Number of running nodes: 0, number of available pods: 0
Jan 27 15:02:27.953: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3054/daemonsets","resourceVersion":"22077936"},"items":null}

Jan 27 15:02:27.957: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3054/pods","resourceVersion":"22077936"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:02:27.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3054" for this suite.
Jan 27 15:02:34.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:02:34.078: INFO: namespace daemonsets-3054 deletion completed in 6.101855054s

• [SLOW TEST:65.717 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:02:34.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 15:02:34.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3833'
Jan 27 15:02:36.153: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 15:02:36.153: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 27 15:02:36.217: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2r2x5]
Jan 27 15:02:36.218: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2r2x5" in namespace "kubectl-3833" to be "running and ready"
Jan 27 15:02:36.284: INFO: Pod "e2e-test-nginx-rc-2r2x5": Phase="Pending", Reason="", readiness=false. Elapsed: 66.635313ms
Jan 27 15:02:38.293: INFO: Pod "e2e-test-nginx-rc-2r2x5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075934789s
Jan 27 15:02:40.343: INFO: Pod "e2e-test-nginx-rc-2r2x5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125635752s
Jan 27 15:02:42.352: INFO: Pod "e2e-test-nginx-rc-2r2x5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134402164s
Jan 27 15:02:44.361: INFO: Pod "e2e-test-nginx-rc-2r2x5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1429792s
Jan 27 15:02:46.370: INFO: Pod "e2e-test-nginx-rc-2r2x5": Phase="Running", Reason="", readiness=true. Elapsed: 10.15235029s
Jan 27 15:02:46.370: INFO: Pod "e2e-test-nginx-rc-2r2x5" satisfied condition "running and ready"
Jan 27 15:02:46.370: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2r2x5]
Jan 27 15:02:46.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3833'
Jan 27 15:02:46.612: INFO: stderr: ""
Jan 27 15:02:46.612: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 27 15:02:46.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3833'
Jan 27 15:02:46.767: INFO: stderr: ""
Jan 27 15:02:46.767: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:02:46.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3833" for this suite.
Jan 27 15:03:08.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:03:08.904: INFO: namespace kubectl-3833 deletion completed in 22.133347572s

• [SLOW TEST:34.826 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:03:08.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 15:03:09.042: INFO: Waiting up to 5m0s for pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156" in namespace "projected-4335" to be "success or failure"
Jan 27 15:03:09.048: INFO: Pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156": Phase="Pending", Reason="", readiness=false. Elapsed: 5.377396ms
Jan 27 15:03:11.055: INFO: Pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013108543s
Jan 27 15:03:13.073: INFO: Pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030929098s
Jan 27 15:03:15.086: INFO: Pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043486438s
Jan 27 15:03:17.093: INFO: Pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051221227s
Jan 27 15:03:19.101: INFO: Pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059271s
STEP: Saw pod success
Jan 27 15:03:19.102: INFO: Pod "downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156" satisfied condition "success or failure"
Jan 27 15:03:19.107: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156 container client-container: 
STEP: delete the pod
Jan 27 15:03:19.176: INFO: Waiting for pod downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156 to disappear
Jan 27 15:03:19.184: INFO: Pod downwardapi-volume-473c9f20-e936-4430-9225-cec0ea45b156 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:03:19.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4335" for this suite.
Jan 27 15:03:25.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:03:25.438: INFO: namespace projected-4335 deletion completed in 6.24852707s

• [SLOW TEST:16.533 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:03:25.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:04:25.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9975" for this suite.
Jan 27 15:04:47.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:04:47.738: INFO: namespace container-probe-9975 deletion completed in 22.141757612s

• [SLOW TEST:82.300 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:04:47.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 15:04:47.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc" in namespace "projected-4165" to be "success or failure"
Jan 27 15:04:47.923: INFO: Pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 61.226083ms
Jan 27 15:04:49.931: INFO: Pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069632155s
Jan 27 15:04:51.943: INFO: Pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080883889s
Jan 27 15:04:53.954: INFO: Pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092074104s
Jan 27 15:04:55.982: INFO: Pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119835903s
Jan 27 15:04:57.988: INFO: Pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126259149s
STEP: Saw pod success
Jan 27 15:04:57.988: INFO: Pod "downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc" satisfied condition "success or failure"
Jan 27 15:04:57.992: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc container client-container: 
STEP: delete the pod
Jan 27 15:04:58.041: INFO: Waiting for pod downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc to disappear
Jan 27 15:04:58.053: INFO: Pod downwardapi-volume-418b177a-30a0-4140-a28d-45c6dd0ac5bc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:04:58.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4165" for this suite.
Jan 27 15:05:04.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:05:04.394: INFO: namespace projected-4165 deletion completed in 6.153818604s

• [SLOW TEST:16.656 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:05:04.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 27 15:05:04.498: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:05:04.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1858" for this suite.
Jan 27 15:05:10.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:05:10.784: INFO: namespace kubectl-1858 deletion completed in 6.158952452s

• [SLOW TEST:6.389 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:05:10.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:05:16.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4905" for this suite.
Jan 27 15:05:22.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:05:22.561: INFO: namespace watch-4905 deletion completed in 6.209967337s

• [SLOW TEST:11.777 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:05:22.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6743 to expose endpoints map[]
Jan 27 15:05:22.750: INFO: Get endpoints failed (11.895862ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 27 15:05:23.756: INFO: successfully validated that service multi-endpoint-test in namespace services-6743 exposes endpoints map[] (1.017651586s elapsed)
STEP: Creating pod pod1 in namespace services-6743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6743 to expose endpoints map[pod1:[100]]
Jan 27 15:05:27.996: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.192353078s elapsed, will retry)
Jan 27 15:05:33.074: INFO: successfully validated that service multi-endpoint-test in namespace services-6743 exposes endpoints map[pod1:[100]] (9.271067326s elapsed)
STEP: Creating pod pod2 in namespace services-6743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6743 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 27 15:05:39.066: INFO: Unexpected endpoints: found map[3f1e9150-104c-4a78-baaf-a181cdcabd1e:[100]], expected map[pod1:[100] pod2:[101]] (5.984220285s elapsed, will retry)
Jan 27 15:05:42.127: INFO: successfully validated that service multi-endpoint-test in namespace services-6743 exposes endpoints map[pod1:[100] pod2:[101]] (9.045322126s elapsed)
STEP: Deleting pod pod1 in namespace services-6743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6743 to expose endpoints map[pod2:[101]]
Jan 27 15:05:43.219: INFO: successfully validated that service multi-endpoint-test in namespace services-6743 exposes endpoints map[pod2:[101]] (1.081331593s elapsed)
STEP: Deleting pod pod2 in namespace services-6743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6743 to expose endpoints map[]
Jan 27 15:05:44.250: INFO: successfully validated that service multi-endpoint-test in namespace services-6743 exposes endpoints map[] (1.023660391s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:05:44.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6743" for this suite.
Jan 27 15:06:06.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:06:06.947: INFO: namespace services-6743 deletion completed in 22.174366501s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:44.386 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:06:06.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 27 15:06:07.001: INFO: Waiting up to 5m0s for pod "pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083" in namespace "emptydir-3411" to be "success or failure"
Jan 27 15:06:07.098: INFO: Pod "pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083": Phase="Pending", Reason="", readiness=false. Elapsed: 96.474541ms
Jan 27 15:06:09.104: INFO: Pod "pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102486358s
Jan 27 15:06:11.111: INFO: Pod "pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109889865s
Jan 27 15:06:13.142: INFO: Pod "pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140333691s
Jan 27 15:06:15.149: INFO: Pod "pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147204989s
STEP: Saw pod success
Jan 27 15:06:15.149: INFO: Pod "pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083" satisfied condition "success or failure"
Jan 27 15:06:15.152: INFO: Trying to get logs from node iruya-node pod pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083 container test-container: 
STEP: delete the pod
Jan 27 15:06:15.286: INFO: Waiting for pod pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083 to disappear
Jan 27 15:06:15.304: INFO: Pod pod-e41b51c8-01b8-4785-9ff1-6b630ad2c083 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:06:15.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3411" for this suite.
Jan 27 15:06:21.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:06:21.529: INFO: namespace emptydir-3411 deletion completed in 6.215583561s

• [SLOW TEST:14.581 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:06:21.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-312f3e27-f58b-4c64-8a5b-eb0d26704897
STEP: Creating secret with name s-test-opt-upd-a8fe4b78-233f-4a5a-bfe9-6fdb7d9ef1e9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-312f3e27-f58b-4c64-8a5b-eb0d26704897
STEP: Updating secret s-test-opt-upd-a8fe4b78-233f-4a5a-bfe9-6fdb7d9ef1e9
STEP: Creating secret with name s-test-opt-create-30a47ede-b7d7-4fdc-959e-3d4f8f82cdf0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:07:45.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1642" for this suite.
Jan 27 15:08:07.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:08:07.769: INFO: namespace secrets-1642 deletion completed in 22.12638579s

• [SLOW TEST:106.240 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:08:07.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 15:08:07.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3226'
Jan 27 15:08:08.085: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 15:08:08.085: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 27 15:08:08.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3226'
Jan 27 15:08:08.361: INFO: stderr: ""
Jan 27 15:08:08.362: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:08:08.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3226" for this suite.
Jan 27 15:08:20.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:08:20.536: INFO: namespace kubectl-3226 deletion completed in 12.166168265s

• [SLOW TEST:12.767 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:08:20.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5255
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 15:08:20.639: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 15:09:00.946: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5255 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 15:09:00.946: INFO: >>> kubeConfig: /root/.kube/config
I0127 15:09:01.012935       8 log.go:172] (0xc0015304d0) (0xc003246960) Create stream
I0127 15:09:01.013031       8 log.go:172] (0xc0015304d0) (0xc003246960) Stream added, broadcasting: 1
I0127 15:09:01.021965       8 log.go:172] (0xc0015304d0) Reply frame received for 1
I0127 15:09:01.022031       8 log.go:172] (0xc0015304d0) (0xc001f934a0) Create stream
I0127 15:09:01.022053       8 log.go:172] (0xc0015304d0) (0xc001f934a0) Stream added, broadcasting: 3
I0127 15:09:01.024786       8 log.go:172] (0xc0015304d0) Reply frame received for 3
I0127 15:09:01.024824       8 log.go:172] (0xc0015304d0) (0xc001f935e0) Create stream
I0127 15:09:01.024843       8 log.go:172] (0xc0015304d0) (0xc001f935e0) Stream added, broadcasting: 5
I0127 15:09:01.027037       8 log.go:172] (0xc0015304d0) Reply frame received for 5
I0127 15:09:02.185346       8 log.go:172] (0xc0015304d0) Data frame received for 3
I0127 15:09:02.185481       8 log.go:172] (0xc001f934a0) (3) Data frame handling
I0127 15:09:02.185514       8 log.go:172] (0xc001f934a0) (3) Data frame sent
I0127 15:09:02.377568       8 log.go:172] (0xc0015304d0) Data frame received for 1
I0127 15:09:02.377933       8 log.go:172] (0xc003246960) (1) Data frame handling
I0127 15:09:02.377986       8 log.go:172] (0xc003246960) (1) Data frame sent
I0127 15:09:02.378023       8 log.go:172] (0xc0015304d0) (0xc003246960) Stream removed, broadcasting: 1
I0127 15:09:02.379364       8 log.go:172] (0xc0015304d0) (0xc001f934a0) Stream removed, broadcasting: 3
I0127 15:09:02.379426       8 log.go:172] (0xc0015304d0) (0xc001f935e0) Stream removed, broadcasting: 5
I0127 15:09:02.379450       8 log.go:172] (0xc0015304d0) Go away received
I0127 15:09:02.379600       8 log.go:172] (0xc0015304d0) (0xc003246960) Stream removed, broadcasting: 1
I0127 15:09:02.379748       8 log.go:172] (0xc0015304d0) (0xc001f934a0) Stream removed, broadcasting: 3
I0127 15:09:02.379765       8 log.go:172] (0xc0015304d0) (0xc001f935e0) Stream removed, broadcasting: 5
Jan 27 15:09:02.379: INFO: Found all expected endpoints: [netserver-0]
Jan 27 15:09:02.389: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5255 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 15:09:02.389: INFO: >>> kubeConfig: /root/.kube/config
I0127 15:09:02.444674       8 log.go:172] (0xc00332b4a0) (0xc001f939a0) Create stream
I0127 15:09:02.444809       8 log.go:172] (0xc00332b4a0) (0xc001f939a0) Stream added, broadcasting: 1
I0127 15:09:02.452681       8 log.go:172] (0xc00332b4a0) Reply frame received for 1
I0127 15:09:02.452712       8 log.go:172] (0xc00332b4a0) (0xc0024aa8c0) Create stream
I0127 15:09:02.452721       8 log.go:172] (0xc00332b4a0) (0xc0024aa8c0) Stream added, broadcasting: 3
I0127 15:09:02.454502       8 log.go:172] (0xc00332b4a0) Reply frame received for 3
I0127 15:09:02.454532       8 log.go:172] (0xc00332b4a0) (0xc001676e60) Create stream
I0127 15:09:02.454568       8 log.go:172] (0xc00332b4a0) (0xc001676e60) Stream added, broadcasting: 5
I0127 15:09:02.456646       8 log.go:172] (0xc00332b4a0) Reply frame received for 5
I0127 15:09:03.621728       8 log.go:172] (0xc00332b4a0) Data frame received for 3
I0127 15:09:03.621815       8 log.go:172] (0xc0024aa8c0) (3) Data frame handling
I0127 15:09:03.621838       8 log.go:172] (0xc0024aa8c0) (3) Data frame sent
I0127 15:09:03.796587       8 log.go:172] (0xc00332b4a0) (0xc0024aa8c0) Stream removed, broadcasting: 3
I0127 15:09:03.796937       8 log.go:172] (0xc00332b4a0) Data frame received for 1
I0127 15:09:03.796964       8 log.go:172] (0xc001f939a0) (1) Data frame handling
I0127 15:09:03.796990       8 log.go:172] (0xc001f939a0) (1) Data frame sent
I0127 15:09:03.797017       8 log.go:172] (0xc00332b4a0) (0xc001f939a0) Stream removed, broadcasting: 1
I0127 15:09:03.797040       8 log.go:172] (0xc00332b4a0) (0xc001676e60) Stream removed, broadcasting: 5
I0127 15:09:03.797078       8 log.go:172] (0xc00332b4a0) Go away received
I0127 15:09:03.797370       8 log.go:172] (0xc00332b4a0) (0xc001f939a0) Stream removed, broadcasting: 1
I0127 15:09:03.797512       8 log.go:172] (0xc00332b4a0) (0xc0024aa8c0) Stream removed, broadcasting: 3
I0127 15:09:03.797562       8 log.go:172] (0xc00332b4a0) (0xc001676e60) Stream removed, broadcasting: 5
Jan 27 15:09:03.797: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:09:03.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5255" for this suite.
Jan 27 15:09:25.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:09:25.985: INFO: namespace pod-network-test-5255 deletion completed in 22.176980501s

• [SLOW TEST:65.448 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:09:25.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 27 15:09:26.060: INFO: Waiting up to 5m0s for pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441" in namespace "containers-2780" to be "success or failure"
Jan 27 15:09:26.087: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441": Phase="Pending", Reason="", readiness=false. Elapsed: 27.239385ms
Jan 27 15:09:28.096: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035764252s
Jan 27 15:09:30.106: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046302042s
Jan 27 15:09:32.114: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054148444s
Jan 27 15:09:34.146: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085390777s
Jan 27 15:09:36.158: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441": Phase="Running", Reason="", readiness=true. Elapsed: 10.097621334s
Jan 27 15:09:38.166: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.105922773s
STEP: Saw pod success
Jan 27 15:09:38.166: INFO: Pod "client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441" satisfied condition "success or failure"
Jan 27 15:09:38.176: INFO: Trying to get logs from node iruya-node pod client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441 container test-container: 
STEP: delete the pod
Jan 27 15:09:38.424: INFO: Waiting for pod client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441 to disappear
Jan 27 15:09:38.428: INFO: Pod client-containers-66b7cfa4-b475-4654-a735-9fe164e3d441 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:09:38.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2780" for this suite.
Jan 27 15:09:44.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:09:44.775: INFO: namespace containers-2780 deletion completed in 6.342675773s

• [SLOW TEST:18.790 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:09:44.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 27 15:09:44.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 27 15:09:45.049: INFO: stderr: ""
Jan 27 15:09:45.050: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:09:45.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1680" for this suite.
Jan 27 15:09:51.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:09:51.189: INFO: namespace kubectl-1680 deletion completed in 6.122295364s

• [SLOW TEST:6.413 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:09:51.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 27 15:09:59.988: INFO: Successfully updated pod "labelsupdate970ab9ff-ec3b-4ad7-85bb-da60b1cbbcb4"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:10:02.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4439" for this suite.
Jan 27 15:10:24.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:10:24.257: INFO: namespace projected-4439 deletion completed in 22.193128077s

• [SLOW TEST:33.068 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:10:24.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 27 15:10:24.375: INFO: Waiting up to 5m0s for pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf" in namespace "emptydir-1742" to be "success or failure"
Jan 27 15:10:24.380: INFO: Pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.198289ms
Jan 27 15:10:26.392: INFO: Pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016865967s
Jan 27 15:10:28.398: INFO: Pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023165969s
Jan 27 15:10:30.408: INFO: Pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033274191s
Jan 27 15:10:32.416: INFO: Pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040782746s
Jan 27 15:10:34.424: INFO: Pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049057746s
STEP: Saw pod success
Jan 27 15:10:34.424: INFO: Pod "pod-f7a165e6-9685-431b-835f-0baf112ddbaf" satisfied condition "success or failure"
Jan 27 15:10:34.429: INFO: Trying to get logs from node iruya-node pod pod-f7a165e6-9685-431b-835f-0baf112ddbaf container test-container: 
STEP: delete the pod
Jan 27 15:10:34.585: INFO: Waiting for pod pod-f7a165e6-9685-431b-835f-0baf112ddbaf to disappear
Jan 27 15:10:34.608: INFO: Pod pod-f7a165e6-9685-431b-835f-0baf112ddbaf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:10:34.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1742" for this suite.
Jan 27 15:10:40.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:10:40.784: INFO: namespace emptydir-1742 deletion completed in 6.158758276s

• [SLOW TEST:16.526 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:10:40.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 27 15:10:50.059: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:10:50.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8016" for this suite.
Jan 27 15:10:56.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:10:56.400: INFO: namespace container-runtime-8016 deletion completed in 6.284857623s

• [SLOW TEST:15.616 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:10:56.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 27 15:10:56.555: INFO: Waiting up to 5m0s for pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9" in namespace "emptydir-5652" to be "success or failure"
Jan 27 15:10:56.567: INFO: Pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.559142ms
Jan 27 15:10:58.578: INFO: Pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023391872s
Jan 27 15:11:00.584: INFO: Pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028883875s
Jan 27 15:11:02.601: INFO: Pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046206715s
Jan 27 15:11:04.622: INFO: Pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067068876s
Jan 27 15:11:06.637: INFO: Pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082329443s
STEP: Saw pod success
Jan 27 15:11:06.638: INFO: Pod "pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9" satisfied condition "success or failure"
Jan 27 15:11:06.646: INFO: Trying to get logs from node iruya-node pod pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9 container test-container: 
STEP: delete the pod
Jan 27 15:11:06.781: INFO: Waiting for pod pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9 to disappear
Jan 27 15:11:06.786: INFO: Pod pod-b374dc5f-3665-4e21-bed7-1f6cf427f1d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:11:06.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5652" for this suite.
Jan 27 15:11:12.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:11:13.014: INFO: namespace emptydir-5652 deletion completed in 6.220968739s

• [SLOW TEST:16.612 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:11:13.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 27 15:11:13.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:11:23.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7451" for this suite.
Jan 27 15:12:09.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:12:09.388: INFO: namespace pods-7451 deletion completed in 46.168654418s

• [SLOW TEST:56.374 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:12:09.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 27 15:12:20.179: INFO: Successfully updated pod "labelsupdate35633d3d-966f-4942-b6c1-54a6232e7e0e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:12:22.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1615" for this suite.
Jan 27 15:12:44.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:12:44.419: INFO: namespace downward-api-1615 deletion completed in 22.185327516s

• [SLOW TEST:35.030 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:12:44.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a58d1aff-e10d-4408-a7b6-f0e83db1609a
STEP: Creating a pod to test consume configMaps
Jan 27 15:12:44.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670" in namespace "projected-3749" to be "success or failure"
Jan 27 15:12:44.580: INFO: Pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670": Phase="Pending", Reason="", readiness=false. Elapsed: 3.982037ms
Jan 27 15:12:46.592: INFO: Pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015888791s
Jan 27 15:12:48.601: INFO: Pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025558502s
Jan 27 15:12:50.618: INFO: Pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041887728s
Jan 27 15:12:52.634: INFO: Pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058521091s
Jan 27 15:12:54.643: INFO: Pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066796298s
STEP: Saw pod success
Jan 27 15:12:54.643: INFO: Pod "pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670" satisfied condition "success or failure"
Jan 27 15:12:54.651: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 27 15:12:54.696: INFO: Waiting for pod pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670 to disappear
Jan 27 15:12:54.700: INFO: Pod pod-projected-configmaps-11e30031-da27-40d1-a918-14f707abd670 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:12:54.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3749" for this suite.
Jan 27 15:13:00.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:13:00.847: INFO: namespace projected-3749 deletion completed in 6.142518409s

• [SLOW TEST:16.428 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:13:00.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 27 15:13:01.001: INFO: Waiting up to 5m0s for pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251" in namespace "emptydir-3310" to be "success or failure"
Jan 27 15:13:01.031: INFO: Pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251": Phase="Pending", Reason="", readiness=false. Elapsed: 30.233683ms
Jan 27 15:13:03.038: INFO: Pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036842879s
Jan 27 15:13:05.049: INFO: Pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048328094s
Jan 27 15:13:07.058: INFO: Pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056600136s
Jan 27 15:13:09.069: INFO: Pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068253306s
Jan 27 15:13:11.079: INFO: Pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077721953s
STEP: Saw pod success
Jan 27 15:13:11.079: INFO: Pod "pod-9e940bb5-46e9-42c7-8988-41d31004e251" satisfied condition "success or failure"
Jan 27 15:13:11.087: INFO: Trying to get logs from node iruya-node pod pod-9e940bb5-46e9-42c7-8988-41d31004e251 container test-container: 
STEP: delete the pod
Jan 27 15:13:11.169: INFO: Waiting for pod pod-9e940bb5-46e9-42c7-8988-41d31004e251 to disappear
Jan 27 15:13:11.185: INFO: Pod pod-9e940bb5-46e9-42c7-8988-41d31004e251 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:13:11.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3310" for this suite.
Jan 27 15:13:17.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:13:17.343: INFO: namespace emptydir-3310 deletion completed in 6.148850364s

• [SLOW TEST:16.495 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:13:17.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-02bb4477-5d2d-4577-870d-d1e4049da931
STEP: Creating a pod to test consume configMaps
Jan 27 15:13:17.449: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73" in namespace "projected-8281" to be "success or failure"
Jan 27 15:13:17.454: INFO: Pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.787349ms
Jan 27 15:13:19.464: INFO: Pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014383508s
Jan 27 15:13:21.471: INFO: Pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021833707s
Jan 27 15:13:23.479: INFO: Pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029709941s
Jan 27 15:13:25.488: INFO: Pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038391692s
Jan 27 15:13:27.495: INFO: Pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045638413s
STEP: Saw pod success
Jan 27 15:13:27.495: INFO: Pod "pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73" satisfied condition "success or failure"
Jan 27 15:13:27.499: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 27 15:13:27.737: INFO: Waiting for pod pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73 to disappear
Jan 27 15:13:27.751: INFO: Pod pod-projected-configmaps-881d29f1-92a4-41f8-8fa9-da8ab96a3e73 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:13:27.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8281" for this suite.
Jan 27 15:13:33.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:13:33.925: INFO: namespace projected-8281 deletion completed in 6.164101656s

• [SLOW TEST:16.581 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:13:33.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-01e03378-383e-4e99-a8e3-488d850feb14
STEP: Creating a pod to test consume configMaps
Jan 27 15:13:34.062: INFO: Waiting up to 5m0s for pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311" in namespace "configmap-4142" to be "success or failure"
Jan 27 15:13:34.080: INFO: Pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311": Phase="Pending", Reason="", readiness=false. Elapsed: 17.209798ms
Jan 27 15:13:36.089: INFO: Pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026238293s
Jan 27 15:13:38.097: INFO: Pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034216161s
Jan 27 15:13:40.106: INFO: Pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044058079s
Jan 27 15:13:42.175: INFO: Pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112721925s
Jan 27 15:13:44.186: INFO: Pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123741005s
STEP: Saw pod success
Jan 27 15:13:44.186: INFO: Pod "pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311" satisfied condition "success or failure"
Jan 27 15:13:44.190: INFO: Trying to get logs from node iruya-node pod pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311 container configmap-volume-test: 
STEP: delete the pod
Jan 27 15:13:44.259: INFO: Waiting for pod pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311 to disappear
Jan 27 15:13:44.269: INFO: Pod pod-configmaps-33df07e1-76ca-42eb-97a6-429a549f4311 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:13:44.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4142" for this suite.
Jan 27 15:13:50.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:13:50.510: INFO: namespace configmap-4142 deletion completed in 6.234412582s

• [SLOW TEST:16.585 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:13:50.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 27 15:13:50.644: INFO: PodSpec: initContainers in spec.initContainers
Jan 27 15:14:59.217: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c6101024-e053-41e5-be50-7887dab04b70", GenerateName:"", Namespace:"init-container-6681", SelfLink:"/api/v1/namespaces/init-container-6681/pods/pod-init-c6101024-e053-41e5-be50-7887dab04b70", UID:"62ffcfef-8537-40ed-8157-600e6fb3643c", ResourceVersion:"22079712", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715734830, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"644445519"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2bbl8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c8a900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2bbl8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2bbl8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2bbl8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002eb9718), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0030b7620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002eb9920)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002eb9940)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002eb9948), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002eb994c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715734830, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715734830, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715734830, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715734830, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00322af40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00209a700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00209a770)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://bd7dd84865f86fee7892104b90a5affe341a8645692b28c273d60b588c8ae783"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00322af80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00322af60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:14:59.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6681" for this suite.
Jan 27 15:15:21.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:15:21.376: INFO: namespace init-container-6681 deletion completed in 22.125468869s

• [SLOW TEST:90.863 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:15:21.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 27 15:15:21.451: INFO: Waiting up to 5m0s for pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f" in namespace "downward-api-3802" to be "success or failure"
Jan 27 15:15:21.471: INFO: Pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.768781ms
Jan 27 15:15:23.483: INFO: Pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032236977s
Jan 27 15:15:25.497: INFO: Pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0463153s
Jan 27 15:15:27.506: INFO: Pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054654162s
Jan 27 15:15:29.516: INFO: Pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064864584s
Jan 27 15:15:31.522: INFO: Pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071386408s
STEP: Saw pod success
Jan 27 15:15:31.522: INFO: Pod "downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f" satisfied condition "success or failure"
Jan 27 15:15:31.526: INFO: Trying to get logs from node iruya-node pod downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f container dapi-container: 
STEP: delete the pod
Jan 27 15:15:31.573: INFO: Waiting for pod downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f to disappear
Jan 27 15:15:31.583: INFO: Pod downward-api-f533f53f-7dc0-40cc-8e23-7286301ec08f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:15:31.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3802" for this suite.
Jan 27 15:15:37.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:15:37.747: INFO: namespace downward-api-3802 deletion completed in 6.157686796s

• [SLOW TEST:16.370 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:15:37.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 27 15:15:37.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-691 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 27 15:15:49.981: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0127 15:15:48.842942    3707 log.go:172] (0xc0008c8000) (0xc0008cc000) Create stream\nI0127 15:15:48.843650    3707 log.go:172] (0xc0008c8000) (0xc0008cc000) Stream added, broadcasting: 1\nI0127 15:15:48.881149    3707 log.go:172] (0xc0008c8000) Reply frame received for 1\nI0127 15:15:48.881267    3707 log.go:172] (0xc0008c8000) (0xc000686140) Create stream\nI0127 15:15:48.881284    3707 log.go:172] (0xc0008c8000) (0xc000686140) Stream added, broadcasting: 3\nI0127 15:15:48.883450    3707 log.go:172] (0xc0008c8000) Reply frame received for 3\nI0127 15:15:48.883482    3707 log.go:172] (0xc0008c8000) (0xc0008cc0a0) Create stream\nI0127 15:15:48.883491    3707 log.go:172] (0xc0008c8000) (0xc0008cc0a0) Stream added, broadcasting: 5\nI0127 15:15:48.886035    3707 log.go:172] (0xc0008c8000) Reply frame received for 5\nI0127 15:15:48.886232    3707 log.go:172] (0xc0008c8000) (0xc000186000) Create stream\nI0127 15:15:48.886265    3707 log.go:172] (0xc0008c8000) (0xc000186000) Stream added, broadcasting: 7\nI0127 15:15:48.888403    3707 log.go:172] (0xc0008c8000) Reply frame received for 7\nI0127 15:15:48.888854    3707 log.go:172] (0xc000686140) (3) Writing data frame\nI0127 15:15:48.889164    3707 log.go:172] (0xc000686140) (3) Writing data frame\nI0127 15:15:48.901330    3707 log.go:172] (0xc0008c8000) Data frame received for 5\nI0127 15:15:48.901397    3707 log.go:172] (0xc0008cc0a0) (5) Data frame handling\nI0127 15:15:48.901442    3707 log.go:172] (0xc0008cc0a0) (5) Data frame sent\nI0127 15:15:48.905226    3707 log.go:172] (0xc0008c8000) Data frame received for 5\nI0127 15:15:48.905294    3707 log.go:172] (0xc0008cc0a0) (5) Data frame handling\nI0127 15:15:48.905324    3707 log.go:172] (0xc0008cc0a0) (5) Data frame sent\nI0127 15:15:49.943346    3707 log.go:172] (0xc0008c8000) Data frame received for 1\nI0127 15:15:49.943470    3707 log.go:172] (0xc0008c8000) (0xc000686140) Stream removed, broadcasting: 3\nI0127 15:15:49.943583    3707 log.go:172] (0xc0008cc000) (1) Data frame handling\nI0127 15:15:49.943606    3707 log.go:172] (0xc0008cc000) (1) Data frame sent\nI0127 15:15:49.943694    3707 log.go:172] (0xc0008c8000) (0xc0008cc0a0) Stream removed, broadcasting: 5\nI0127 15:15:49.944042    3707 log.go:172] (0xc0008c8000) (0xc000186000) Stream removed, broadcasting: 7\nI0127 15:15:49.944267    3707 log.go:172] (0xc0008c8000) (0xc0008cc000) Stream removed, broadcasting: 1\nI0127 15:15:49.944445    3707 log.go:172] (0xc0008c8000) Go away received\nI0127 15:15:49.944862    3707 log.go:172] (0xc0008c8000) (0xc0008cc000) Stream removed, broadcasting: 1\nI0127 15:15:49.944888    3707 log.go:172] (0xc0008c8000) (0xc000686140) Stream removed, broadcasting: 3\nI0127 15:15:49.944898    3707 log.go:172] (0xc0008c8000) (0xc0008cc0a0) Stream removed, broadcasting: 5\nI0127 15:15:49.944906    3707 log.go:172] (0xc0008c8000) (0xc000186000) Stream removed, broadcasting: 7\n"
Jan 27 15:15:49.981: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:15:51.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-691" for this suite.
Jan 27 15:15:58.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:15:58.182: INFO: namespace kubectl-691 deletion completed in 6.175382768s

• [SLOW TEST:20.434 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:15:58.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 27 15:16:09.009: INFO: Successfully updated pod "annotationupdate7f509805-e43e-489c-b1b2-d631d25a99e1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:16:11.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5394" for this suite.
Jan 27 15:16:35.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:16:35.398: INFO: namespace projected-5394 deletion completed in 24.230872335s

• [SLOW TEST:37.216 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
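Editor's note: the Projected downwardAPI spec above creates a pod whose volume exposes its own annotations, then updates the annotations and waits for the mounted file to change. A minimal sketch of that kind of pod, using a hypothetical name and a busybox image (both assumptions, not from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo      # hypothetical; the test generates a UUID-suffixed name
  annotations:
    build: "one"                   # updating this later is reflected in the mounted file
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

The kubelet refreshes downward API volume contents periodically, which is why the test polls the file rather than expecting an instantaneous update.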
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:16:35.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-9a3e01e5-4da6-4738-a8f6-ad023e161b8d
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:16:35.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2252" for this suite.
Jan 27 15:16:41.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:16:41.727: INFO: namespace secrets-2252 deletion completed in 6.166781219s

• [SLOW TEST:6.329 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
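Editor's note: the Secrets spec above verifies that the API server rejects a Secret containing an empty key. A sketch of the kind of manifest that should fail validation (name and value are illustrative, not from the log):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test       # hypothetical; the test appends a UUID
data:
  "": dmFsdWU=                     # empty key: rejected by API server validation
```

Applying this is expected to return a validation error rather than create the object, which is exactly the negative behavior the conformance test asserts.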
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:16:41.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-44ab7df9-3036-4419-85b5-d1593631e74a
STEP: Creating a pod to test consume secrets
Jan 27 15:16:41.974: INFO: Waiting up to 5m0s for pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e" in namespace "secrets-5739" to be "success or failure"
Jan 27 15:16:42.067: INFO: Pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e": Phase="Pending", Reason="", readiness=false. Elapsed: 93.727863ms
Jan 27 15:16:44.075: INFO: Pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101911043s
Jan 27 15:16:46.084: INFO: Pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110069852s
Jan 27 15:16:48.092: INFO: Pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118020752s
Jan 27 15:16:50.102: INFO: Pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128018171s
Jan 27 15:16:52.111: INFO: Pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137862926s
STEP: Saw pod success
Jan 27 15:16:52.112: INFO: Pod "pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e" satisfied condition "success or failure"
Jan 27 15:16:52.116: INFO: Trying to get logs from node iruya-node pod pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e container secret-volume-test: 
STEP: delete the pod
Jan 27 15:16:52.191: INFO: Waiting for pod pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e to disappear
Jan 27 15:16:52.197: INFO: Pod pod-secrets-9c1daf6a-df29-4d9d-b665-d10e5658223e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:16:52.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5739" for this suite.
Jan 27 15:16:58.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:16:58.393: INFO: namespace secrets-5739 deletion completed in 6.18724529s

• [SLOW TEST:16.665 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
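Editor's note: the spec above mounts the same Secret into a pod through two separate volumes and checks both mounts are readable. A minimal sketch of such a pod, assuming a busybox image and illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # List both mount points so "success or failure" can be judged from the exit code
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test      # the same Secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: secret-test
```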
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:16:58.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 27 15:16:58.543: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885" in namespace "downward-api-6539" to be "success or failure"
Jan 27 15:16:58.552: INFO: Pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885": Phase="Pending", Reason="", readiness=false. Elapsed: 8.880822ms
Jan 27 15:17:00.569: INFO: Pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026123341s
Jan 27 15:17:02.582: INFO: Pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039167251s
Jan 27 15:17:04.589: INFO: Pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046728468s
Jan 27 15:17:06.603: INFO: Pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060191216s
Jan 27 15:17:08.611: INFO: Pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068179347s
STEP: Saw pod success
Jan 27 15:17:08.611: INFO: Pod "downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885" satisfied condition "success or failure"
Jan 27 15:17:08.615: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885 container client-container: 
STEP: delete the pod
Jan 27 15:17:08.746: INFO: Waiting for pod downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885 to disappear
Jan 27 15:17:08.754: INFO: Pod downwardapi-volume-1418ad4e-b0b2-4e0d-8688-e4eb36a09885 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:17:08.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6539" for this suite.
Jan 27 15:17:14.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:17:14.907: INFO: namespace downward-api-6539 deletion completed in 6.143940119s

• [SLOW TEST:16.513 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
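Editor's note: the Downward API volume spec above relies on the rule that when a container sets no CPU limit, `resourceFieldRef` for `limits.cpu` reports the node's allocatable CPU instead. A sketch of that setup (names and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu here, so the file below contains node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```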
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:17:14.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5105a248-c95e-4ff2-a2d1-3e4a0d6d14e8
STEP: Creating configMap with name cm-test-opt-upd-34f97656-82d3-4913-87f1-d88027342dca
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5105a248-c95e-4ff2-a2d1-3e4a0d6d14e8
STEP: Updating configmap cm-test-opt-upd-34f97656-82d3-4913-87f1-d88027342dca
STEP: Creating configMap with name cm-test-opt-create-89012962-40f9-4565-a0aa-db6b292cd9f6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:17:31.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4280" for this suite.
Jan 27 15:17:53.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:17:53.827: INFO: namespace configmap-4280 deletion completed in 22.190682399s

• [SLOW TEST:38.920 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
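Editor's note: the ConfigMap spec above exercises `optional: true` volumes: the pod must start even when a referenced ConfigMap is absent, and later creation, update, or deletion of the ConfigMaps must show up in the mounted files. A sketch of one such optional volume (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-optional-demo    # hypothetical name
spec:
  containers:
  - name: cm-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/cm; sleep 5; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-create     # may not exist yet
      optional: true               # pod starts anyway; files appear once the CM is created
```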
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:17:53.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3500
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 15:17:53.931: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 15:18:36.491: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3500 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 15:18:36.491: INFO: >>> kubeConfig: /root/.kube/config
I0127 15:18:36.580449       8 log.go:172] (0xc000efe580) (0xc0021a6000) Create stream
I0127 15:18:36.580638       8 log.go:172] (0xc000efe580) (0xc0021a6000) Stream added, broadcasting: 1
I0127 15:18:36.592255       8 log.go:172] (0xc000efe580) Reply frame received for 1
I0127 15:18:36.592302       8 log.go:172] (0xc000efe580) (0xc002434fa0) Create stream
I0127 15:18:36.592316       8 log.go:172] (0xc000efe580) (0xc002434fa0) Stream added, broadcasting: 3
I0127 15:18:36.594464       8 log.go:172] (0xc000efe580) Reply frame received for 3
I0127 15:18:36.594495       8 log.go:172] (0xc000efe580) (0xc002435040) Create stream
I0127 15:18:36.594508       8 log.go:172] (0xc000efe580) (0xc002435040) Stream added, broadcasting: 5
I0127 15:18:36.604093       8 log.go:172] (0xc000efe580) Reply frame received for 5
I0127 15:18:36.773881       8 log.go:172] (0xc000efe580) Data frame received for 3
I0127 15:18:36.773998       8 log.go:172] (0xc002434fa0) (3) Data frame handling
I0127 15:18:36.774047       8 log.go:172] (0xc002434fa0) (3) Data frame sent
I0127 15:18:36.977588       8 log.go:172] (0xc000efe580) (0xc002434fa0) Stream removed, broadcasting: 3
I0127 15:18:36.977755       8 log.go:172] (0xc000efe580) Data frame received for 1
I0127 15:18:36.977772       8 log.go:172] (0xc0021a6000) (1) Data frame handling
I0127 15:18:36.977805       8 log.go:172] (0xc0021a6000) (1) Data frame sent
I0127 15:18:36.977883       8 log.go:172] (0xc000efe580) (0xc0021a6000) Stream removed, broadcasting: 1
I0127 15:18:36.978142       8 log.go:172] (0xc000efe580) (0xc002435040) Stream removed, broadcasting: 5
I0127 15:18:36.978168       8 log.go:172] (0xc000efe580) Go away received
I0127 15:18:36.978215       8 log.go:172] (0xc000efe580) (0xc0021a6000) Stream removed, broadcasting: 1
I0127 15:18:36.978243       8 log.go:172] (0xc000efe580) (0xc002434fa0) Stream removed, broadcasting: 3
I0127 15:18:36.978264       8 log.go:172] (0xc000efe580) (0xc002435040) Stream removed, broadcasting: 5
Jan 27 15:18:36.978: INFO: Found all expected endpoints: [netserver-0]
Jan 27 15:18:36.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3500 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 15:18:36.989: INFO: >>> kubeConfig: /root/.kube/config
I0127 15:18:37.061124       8 log.go:172] (0xc000e964d0) (0xc000f4d720) Create stream
I0127 15:18:37.061246       8 log.go:172] (0xc000e964d0) (0xc000f4d720) Stream added, broadcasting: 1
I0127 15:18:37.071667       8 log.go:172] (0xc000e964d0) Reply frame received for 1
I0127 15:18:37.071815       8 log.go:172] (0xc000e964d0) (0xc000ac4b40) Create stream
I0127 15:18:37.071830       8 log.go:172] (0xc000e964d0) (0xc000ac4b40) Stream added, broadcasting: 3
I0127 15:18:37.075157       8 log.go:172] (0xc000e964d0) Reply frame received for 3
I0127 15:18:37.075193       8 log.go:172] (0xc000e964d0) (0xc001f93860) Create stream
I0127 15:18:37.075205       8 log.go:172] (0xc000e964d0) (0xc001f93860) Stream added, broadcasting: 5
I0127 15:18:37.076542       8 log.go:172] (0xc000e964d0) Reply frame received for 5
I0127 15:18:37.237115       8 log.go:172] (0xc000e964d0) Data frame received for 3
I0127 15:18:37.237168       8 log.go:172] (0xc000ac4b40) (3) Data frame handling
I0127 15:18:37.237182       8 log.go:172] (0xc000ac4b40) (3) Data frame sent
I0127 15:18:37.406928       8 log.go:172] (0xc000e964d0) (0xc000ac4b40) Stream removed, broadcasting: 3
I0127 15:18:37.407158       8 log.go:172] (0xc000e964d0) Data frame received for 1
I0127 15:18:37.407201       8 log.go:172] (0xc000e964d0) (0xc001f93860) Stream removed, broadcasting: 5
I0127 15:18:37.407240       8 log.go:172] (0xc000f4d720) (1) Data frame handling
I0127 15:18:37.407282       8 log.go:172] (0xc000f4d720) (1) Data frame sent
I0127 15:18:37.407296       8 log.go:172] (0xc000e964d0) (0xc000f4d720) Stream removed, broadcasting: 1
I0127 15:18:37.407308       8 log.go:172] (0xc000e964d0) Go away received
I0127 15:18:37.407837       8 log.go:172] (0xc000e964d0) (0xc000f4d720) Stream removed, broadcasting: 1
I0127 15:18:37.407849       8 log.go:172] (0xc000e964d0) (0xc000ac4b40) Stream removed, broadcasting: 3
I0127 15:18:37.407853       8 log.go:172] (0xc000e964d0) (0xc001f93860) Stream removed, broadcasting: 5
Jan 27 15:18:37.407: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:18:37.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3500" for this suite.
Jan 27 15:19:01.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:19:01.678: INFO: namespace pod-network-test-3500 deletion completed in 24.262316799s

• [SLOW TEST:67.851 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 27 15:19:01.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 15:19:01.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5532'
Jan 27 15:19:01.877: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 15:19:01.877: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 27 15:19:03.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5532'
Jan 27 15:19:04.096: INFO: stderr: ""
Jan 27 15:19:04.096: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 27 15:19:04.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5532" for this suite.
Jan 27 15:19:10.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 15:19:10.333: INFO: namespace kubectl-5532 deletion completed in 6.229862182s

• [SLOW TEST:8.655 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
Jan 27 15:19:10.334: INFO: Running AfterSuite actions on all nodes
Jan 27 15:19:10.334: INFO: Running AfterSuite actions on node 1
Jan 27 15:19:10.334: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8582.309 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS