I0317 10:46:43.506472 6 e2e.go:224] Starting e2e run "97158376-683c-11ea-b08f-0242ac11000f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584442002 - Will randomize all specs
Will run 201 of 2164 specs

Mar 17 10:46:43.713: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 10:46:43.717: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 17 10:46:43.734: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 17 10:46:43.765: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 17 10:46:43.765: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 17 10:46:43.765: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 17 10:46:43.777: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 17 10:46:43.777: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 17 10:46:43.777: INFO: e2e test version: v1.13.12
Mar 17 10:46:43.778: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 10:46:43.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
Mar 17 10:46:43.878: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-j5l46
I0317 10:46:43.884030 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-j5l46, replica count: 1
I0317 10:46:44.934490 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0317 10:46:45.934694 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0317 10:46:46.934939 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0317 10:46:47.935171 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 17 10:46:48.073: INFO: Created: latency-svc-f4hsl
Mar 17 10:46:48.137: INFO: Got endpoints: latency-svc-f4hsl [102.385891ms]
Mar 17 10:46:48.185: INFO: Created: latency-svc-l8gkt
Mar 17 10:46:48.196: INFO: Got endpoints: latency-svc-l8gkt [58.987253ms]
Mar 17 10:46:48.215: INFO: Created: latency-svc-h5klk
Mar 17 10:46:48.242: INFO: Got endpoints: latency-svc-h5klk [104.244846ms]
Mar 17 10:46:48.257: INFO: Created: latency-svc-4flsw
Mar 17 10:46:48.269: INFO: Got endpoints: latency-svc-4flsw [131.514742ms]
Mar 17 10:46:48.295: INFO: Created: latency-svc-6flzd
Mar 17 10:46:48.305: INFO: Got endpoints: latency-svc-6flzd [167.787914ms]
Mar 17 10:46:48.324: INFO: Created: latency-svc-hhthf
Mar 17 10:46:48.398: INFO: Got endpoints: latency-svc-hhthf [260.229545ms]
Mar 17 10:46:48.413: INFO: Created: latency-svc-lx54r
Mar 17 10:46:48.426: INFO: Got endpoints: latency-svc-lx54r [288.137237ms]
Mar 17 10:46:48.455: INFO: Created: latency-svc-xhhz2
Mar 17 10:46:48.468: INFO: Got endpoints: latency-svc-xhhz2 [330.51817ms]
Mar 17 10:46:48.485: INFO: Created: latency-svc-p6sn9
Mar 17 10:46:48.541: INFO: Got endpoints: latency-svc-p6sn9 [403.503663ms]
Mar 17 10:46:48.544: INFO: Created: latency-svc-zw2dd
Mar 17 10:46:48.553: INFO: Got endpoints: latency-svc-zw2dd [415.444571ms]
Mar 17 10:46:48.577: INFO: Created: latency-svc-xkh7b
Mar 17 10:46:48.589: INFO: Got endpoints: latency-svc-xkh7b [451.709808ms]
Mar 17 10:46:48.608: INFO: Created: latency-svc-8dvkf
Mar 17 10:46:48.619: INFO: Got endpoints: latency-svc-8dvkf [481.776617ms]
Mar 17 10:46:48.636: INFO: Created: latency-svc-wnvmp
Mar 17 10:46:48.667: INFO: Got endpoints: latency-svc-wnvmp [528.785768ms]
Mar 17 10:46:48.683: INFO: Created: latency-svc-pkwhm
Mar 17 10:46:48.698: INFO: Got endpoints: latency-svc-pkwhm [560.61211ms]
Mar 17 10:46:48.719: INFO: Created: latency-svc-gp2hj
Mar 17 10:46:48.737: INFO: Got endpoints: latency-svc-gp2hj [598.740696ms]
Mar 17 10:46:48.767: INFO: Created: latency-svc-vg6b8
Mar 17 10:46:48.810: INFO: Got endpoints: latency-svc-vg6b8 [672.64314ms]
Mar 17 10:46:48.812: INFO: Created: latency-svc-9cmnt
Mar 17 10:46:48.840: INFO: Got endpoints: latency-svc-9cmnt [643.711761ms]
Mar 17 10:46:48.878: INFO: Created: latency-svc-bvrlt
Mar 17 10:46:48.933: INFO: Got endpoints: latency-svc-bvrlt [691.24307ms]
Mar 17 10:46:48.941: INFO: Created: latency-svc-lkgh6
Mar 17 10:46:48.970: INFO: Got endpoints: latency-svc-lkgh6 [700.374494ms]
Mar 17 10:46:49.001: INFO: Created: latency-svc-n2spn
Mar 17 10:46:49.018: INFO: Got endpoints: latency-svc-n2spn [712.13544ms]
Mar 17 10:46:49.279: INFO: Created: latency-svc-k8lc4
Mar 17 10:46:49.282: INFO: Got endpoints: latency-svc-k8lc4 [884.182915ms]
Mar 17 10:46:49.308: INFO: Created: latency-svc-8l4fv
Mar 17 10:46:49.336: INFO: Got endpoints: latency-svc-8l4fv [910.386638ms]
Mar 17 10:46:49.634: INFO: Created: latency-svc-542zr
Mar 17 10:46:49.642: INFO: Got endpoints: latency-svc-542zr [1.173276932s]
Mar 17 10:46:49.660: INFO: Created: latency-svc-sxq7d
Mar 17 10:46:49.672: INFO: Got endpoints: latency-svc-sxq7d [1.130799461s]
Mar 17 10:46:49.691: INFO: Created: latency-svc-6m72x
Mar 17 10:46:49.708: INFO: Got endpoints: latency-svc-6m72x [1.154786312s]
Mar 17 10:46:49.727: INFO: Created: latency-svc-npl9d
Mar 17 10:46:49.768: INFO: Got endpoints: latency-svc-npl9d [1.178844899s]
Mar 17 10:46:49.781: INFO: Created: latency-svc-94xhp
Mar 17 10:46:49.792: INFO: Got endpoints: latency-svc-94xhp [1.172699744s]
Mar 17 10:46:49.810: INFO: Created: latency-svc-qmhw7
Mar 17 10:46:49.823: INFO: Got endpoints: latency-svc-qmhw7 [1.156510876s]
Mar 17 10:46:49.840: INFO: Created: latency-svc-lz59r
Mar 17 10:46:49.854: INFO: Got endpoints: latency-svc-lz59r [1.155223278s]
Mar 17 10:46:49.937: INFO: Created: latency-svc-zlpbt
Mar 17 10:46:49.941: INFO: Got endpoints: latency-svc-zlpbt [1.204277222s]
Mar 17 10:46:49.991: INFO: Created: latency-svc-jd4f8
Mar 17 10:46:50.004: INFO: Got endpoints: latency-svc-jd4f8 [1.193751568s]
Mar 17 10:46:50.021: INFO: Created: latency-svc-hwbd4
Mar 17 10:46:50.034: INFO: Got endpoints: latency-svc-hwbd4 [1.193805612s]
Mar 17 10:46:50.105: INFO: Created: latency-svc-fld6v
Mar 17 10:46:50.109: INFO: Got endpoints: latency-svc-fld6v [1.175820865s]
Mar 17 10:46:50.134: INFO: Created: latency-svc-cgr7k
Mar 17 10:46:50.154: INFO: Got endpoints: latency-svc-cgr7k [1.184525726s]
Mar 17 10:46:50.189: INFO: Created: latency-svc-qtnc7
Mar 17 10:46:50.272: INFO: Got endpoints: latency-svc-qtnc7 [1.254415748s]
Mar 17 10:46:50.278: INFO: Created: latency-svc-g8lws
Mar 17 10:46:50.291: INFO: Got endpoints: latency-svc-g8lws [1.008570109s]
Mar 17 10:46:50.325: INFO: Created: latency-svc-l5tdr
Mar 17 10:46:50.336: INFO: Got endpoints: latency-svc-l5tdr [999.90683ms]
Mar 17 10:46:50.356: INFO: Created: latency-svc-m5zk9
Mar 17 10:46:50.422: INFO: Got endpoints: latency-svc-m5zk9 [780.018957ms]
Mar 17 10:46:50.424: INFO: Created: latency-svc-ch9m5
Mar 17 10:46:50.447: INFO: Got endpoints: latency-svc-ch9m5 [774.306353ms]
Mar 17 10:46:50.477: INFO: Created: latency-svc-dzdxp
Mar 17 10:46:50.498: INFO: Got endpoints: latency-svc-dzdxp [790.367413ms]
Mar 17 10:46:50.519: INFO: Created: latency-svc-7vjjd
Mar 17 10:46:50.601: INFO: Got endpoints: latency-svc-7vjjd [832.834274ms]
Mar 17 10:46:50.604: INFO: Created: latency-svc-7bbpg
Mar 17 10:46:50.607: INFO: Got endpoints: latency-svc-7bbpg [814.67366ms]
Mar 17 10:46:50.650: INFO: Created: latency-svc-fs5xz
Mar 17 10:46:50.661: INFO: Got endpoints: latency-svc-fs5xz [837.762498ms]
Mar 17 10:46:50.681: INFO: Created: latency-svc-b9clx
Mar 17 10:46:50.697: INFO: Got endpoints: latency-svc-b9clx [843.697398ms]
Mar 17 10:46:50.746: INFO: Created: latency-svc-v78kv
Mar 17 10:46:50.748: INFO: Got endpoints: latency-svc-v78kv [807.402289ms]
Mar 17 10:46:50.771: INFO: Created: latency-svc-bh4w9
Mar 17 10:46:50.782: INFO: Got endpoints: latency-svc-bh4w9 [777.916427ms]
Mar 17 10:46:50.801: INFO: Created: latency-svc-rdpdg
Mar 17 10:46:50.812: INFO: Got endpoints: latency-svc-rdpdg [778.282715ms]
Mar 17 10:46:50.833: INFO: Created: latency-svc-fc8gd
Mar 17 10:46:50.843: INFO: Got endpoints: latency-svc-fc8gd [733.817386ms]
Mar 17 10:46:50.895: INFO: Created: latency-svc-g4zrz
Mar 17 10:46:50.903: INFO: Got endpoints: latency-svc-g4zrz [748.368598ms]
Mar 17 10:46:50.920: INFO: Created: latency-svc-9shpz
Mar 17 10:46:50.933: INFO: Got endpoints: latency-svc-9shpz [661.27952ms]
Mar 17 10:46:50.963: INFO: Created: latency-svc-lkwdz
Mar 17 10:46:50.988: INFO: Got endpoints: latency-svc-lkwdz [697.10455ms]
Mar 17 10:46:51.051: INFO: Created: latency-svc-qcfvg
Mar 17 10:46:51.053: INFO: Got endpoints: latency-svc-qcfvg [716.618025ms]
Mar 17 10:46:51.081: INFO: Created: latency-svc-cfpdq
Mar 17 10:46:51.102: INFO: Got endpoints: latency-svc-cfpdq [680.371374ms]
Mar 17 10:46:51.129: INFO: Created: latency-svc-9qnt5
Mar 17 10:46:51.145: INFO: Got endpoints: latency-svc-9qnt5 [697.97267ms]
Mar 17 10:46:51.196: INFO: Created: latency-svc-rfnnt
Mar 17 10:46:51.204: INFO: Got endpoints: latency-svc-rfnnt [705.831123ms]
Mar 17 10:46:51.221: INFO: Created: latency-svc-v4r4x
Mar 17 10:46:51.235: INFO: Got endpoints: latency-svc-v4r4x [633.443289ms]
Mar 17 10:46:51.257: INFO: Created: latency-svc-g87wx
Mar 17 10:46:51.265: INFO: Got endpoints: latency-svc-g87wx [657.964806ms]
Mar 17 10:46:51.287: INFO: Created: latency-svc-6t7qg
Mar 17 10:46:51.350: INFO: Got endpoints: latency-svc-6t7qg [688.391251ms]
Mar 17 10:46:51.351: INFO: Created: latency-svc-bh87h
Mar 17 10:46:51.355: INFO: Got endpoints: latency-svc-bh87h [657.593267ms]
Mar 17 10:46:51.405: INFO: Created: latency-svc-87htz
Mar 17 10:46:51.422: INFO: Got endpoints: latency-svc-87htz [673.404747ms]
Mar 17 10:46:51.443: INFO: Created: latency-svc-2lnvh
Mar 17 10:46:51.505: INFO: Got endpoints: latency-svc-2lnvh [723.174261ms]
Mar 17 10:46:51.507: INFO: Created: latency-svc-lsmz7
Mar 17 10:46:51.512: INFO: Got endpoints: latency-svc-lsmz7 [699.920322ms]
Mar 17 10:46:51.534: INFO: Created: latency-svc-xk5gf
Mar 17 10:46:51.543: INFO: Got endpoints: latency-svc-xk5gf [699.991737ms]
Mar 17 10:46:51.562: INFO: Created: latency-svc-vg649
Mar 17 10:46:51.573: INFO: Got endpoints: latency-svc-vg649 [670.367991ms]
Mar 17 10:46:51.592: INFO: Created: latency-svc-k4ksz
Mar 17 10:46:51.604: INFO: Got endpoints: latency-svc-k4ksz [670.058124ms]
Mar 17 10:46:51.656: INFO: Created: latency-svc-f4q99
Mar 17 10:46:51.659: INFO: Got endpoints: latency-svc-f4q99 [671.303506ms]
Mar 17 10:46:51.700: INFO: Created: latency-svc-qkm6k
Mar 17 10:46:51.712: INFO: Got endpoints: latency-svc-qkm6k [659.198321ms]
Mar 17 10:46:51.748: INFO: Created: latency-svc-khqck
Mar 17 10:46:51.792: INFO: Got endpoints: latency-svc-khqck [690.023205ms]
Mar 17 10:46:51.809: INFO: Created: latency-svc-67x45
Mar 17 10:46:51.820: INFO: Got endpoints: latency-svc-67x45 [675.843406ms]
Mar 17 10:46:51.856: INFO: Created: latency-svc-2pn2q
Mar 17 10:46:51.869: INFO: Got endpoints: latency-svc-2pn2q [664.306394ms]
Mar 17 10:46:51.885: INFO: Created: latency-svc-qqkll
Mar 17 10:46:51.942: INFO: Got endpoints: latency-svc-qqkll [707.389678ms]
Mar 17 10:46:51.945: INFO: Created: latency-svc-2qz4p
Mar 17 10:46:51.964: INFO: Got endpoints: latency-svc-2qz4p [699.379272ms]
Mar 17 10:46:51.966: INFO: Created: latency-svc-9qpqm
Mar 17 10:46:51.988: INFO: Got endpoints: latency-svc-9qpqm [638.724771ms]
Mar 17 10:46:52.025: INFO: Created: latency-svc-n8bjb
Mar 17 10:46:52.038: INFO: Got endpoints: latency-svc-n8bjb [683.220736ms]
Mar 17 10:46:52.100: INFO: Created: latency-svc-hj9q9
Mar 17 10:46:52.122: INFO: Got endpoints: latency-svc-hj9q9 [700.063302ms]
Mar 17 10:46:52.149: INFO: Created: latency-svc-xl9f5
Mar 17 10:46:52.164: INFO: Got endpoints: latency-svc-xl9f5 [659.10136ms]
Mar 17 10:46:52.236: INFO: Created: latency-svc-jdgbd
Mar 17 10:46:52.240: INFO: Got endpoints: latency-svc-jdgbd [727.203896ms]
Mar 17 10:46:52.276: INFO: Created: latency-svc-9tkwk
Mar 17 10:46:52.297: INFO: Got endpoints: latency-svc-9tkwk [753.821523ms]
Mar 17 10:46:52.317: INFO: Created: latency-svc-mzp8w
Mar 17 10:46:52.334: INFO: Got endpoints: latency-svc-mzp8w [760.605583ms]
Mar 17 10:46:52.386: INFO: Created: latency-svc-7xbw4
Mar 17 10:46:52.393: INFO: Got endpoints: latency-svc-7xbw4 [789.848641ms]
Mar 17 10:46:52.413: INFO: Created: latency-svc-mdjtr
Mar 17 10:46:52.424: INFO: Got endpoints: latency-svc-mdjtr [764.450991ms]
Mar 17 10:46:52.444: INFO: Created: latency-svc-9jz68
Mar 17 10:46:52.474: INFO: Got endpoints: latency-svc-9jz68 [762.005408ms]
Mar 17 10:46:52.578: INFO: Created: latency-svc-8jsvf
Mar 17 10:46:52.586: INFO: Got endpoints: latency-svc-8jsvf [793.805322ms]
Mar 17 10:46:52.623: INFO: Created: latency-svc-4xfjs
Mar 17 10:46:52.649: INFO: Got endpoints: latency-svc-4xfjs [828.532136ms]
Mar 17 10:46:52.745: INFO: Created: latency-svc-k9rdq
Mar 17 10:46:52.748: INFO: Got endpoints: latency-svc-k9rdq [879.354705ms]
Mar 17 10:46:52.773: INFO: Created: latency-svc-88xdq
Mar 17 10:46:52.797: INFO: Got endpoints: latency-svc-88xdq [854.365665ms]
Mar 17 10:46:52.845: INFO: Created: latency-svc-z74zj
Mar 17 10:46:52.925: INFO: Got endpoints: latency-svc-z74zj [960.469179ms]
Mar 17 10:46:52.926: INFO: Created: latency-svc-2k2k2
Mar 17 10:46:52.955: INFO: Got endpoints: latency-svc-2k2k2 [966.448623ms]
Mar 17 10:46:52.995: INFO: Created: latency-svc-hpzhz
Mar 17 10:46:53.050: INFO: Got endpoints: latency-svc-hpzhz [1.01190548s]
Mar 17 10:46:53.067: INFO: Created: latency-svc-svrft
Mar 17 10:46:53.098: INFO: Got endpoints: latency-svc-svrft [976.026771ms]
Mar 17 10:46:53.123: INFO: Created: latency-svc-k6wrd
Mar 17 10:46:53.146: INFO: Got endpoints: latency-svc-k6wrd [981.82235ms]
Mar 17 10:46:53.194: INFO: Created: latency-svc-k7sx9
Mar 17 10:46:53.198: INFO: Got endpoints: latency-svc-k7sx9 [957.897052ms]
Mar 17 10:46:53.223: INFO: Created: latency-svc-gwk2n
Mar 17 10:46:53.236: INFO: Got endpoints: latency-svc-gwk2n [939.460708ms]
Mar 17 10:46:53.253: INFO: Created: latency-svc-trt47
Mar 17 10:46:53.267: INFO: Got endpoints: latency-svc-trt47 [932.761702ms]
Mar 17 10:46:53.284: INFO: Created: latency-svc-p5w9w
Mar 17 10:46:53.331: INFO: Got endpoints: latency-svc-p5w9w [937.638777ms]
Mar 17 10:46:53.338: INFO: Created: latency-svc-qqns8
Mar 17 10:46:53.352: INFO: Got endpoints: latency-svc-qqns8 [927.5575ms]
Mar 17 10:46:53.369: INFO: Created: latency-svc-5hzgw
Mar 17 10:46:53.382: INFO: Got endpoints: latency-svc-5hzgw [907.267337ms]
Mar 17 10:46:53.403: INFO: Created: latency-svc-hgf5l
Mar 17 10:46:53.418: INFO: Got endpoints: latency-svc-hgf5l [831.590931ms]
Mar 17 10:46:53.482: INFO: Created: latency-svc-z2xfg
Mar 17 10:46:53.484: INFO: Got endpoints: latency-svc-z2xfg [835.233449ms]
Mar 17 10:46:53.511: INFO: Created: latency-svc-xw9v6
Mar 17 10:46:53.520: INFO: Got endpoints: latency-svc-xw9v6 [772.005708ms]
Mar 17 10:46:53.542: INFO: Created: latency-svc-gq4mx
Mar 17 10:46:53.563: INFO: Got endpoints: latency-svc-gq4mx [766.255244ms]
Mar 17 10:46:53.625: INFO: Created: latency-svc-kg9fg
Mar 17 10:46:53.628: INFO: Got endpoints: latency-svc-kg9fg [703.208066ms]
Mar 17 10:46:53.656: INFO: Created: latency-svc-sqkcd
Mar 17 10:46:53.671: INFO: Got endpoints: latency-svc-sqkcd [716.427023ms]
Mar 17 10:46:53.703: INFO: Created: latency-svc-2gppg
Mar 17 10:46:53.714: INFO: Got endpoints: latency-svc-2gppg [663.353893ms]
Mar 17 10:46:53.781: INFO: Created: latency-svc-j9c56
Mar 17 10:46:53.785: INFO: Got endpoints: latency-svc-j9c56 [686.721498ms]
Mar 17 10:46:53.811: INFO: Created: latency-svc-znq7j
Mar 17 10:46:53.822: INFO: Got endpoints: latency-svc-znq7j [675.367343ms]
Mar 17 10:46:53.843: INFO: Created: latency-svc-kclxx
Mar 17 10:46:53.858: INFO: Got endpoints: latency-svc-kclxx [660.764143ms]
Mar 17 10:46:53.878: INFO: Created: latency-svc-5g285
Mar 17 10:46:53.942: INFO: Got endpoints: latency-svc-5g285 [705.882963ms]
Mar 17 10:46:53.944: INFO: Created: latency-svc-77d8b
Mar 17 10:46:53.948: INFO: Got endpoints: latency-svc-77d8b [681.689882ms]
Mar 17 10:46:53.966: INFO: Created: latency-svc-twvfx
Mar 17 10:46:53.980: INFO: Got endpoints: latency-svc-twvfx [648.356876ms]
Mar 17 10:46:54.033: INFO: Created: latency-svc-gk9r5
Mar 17 10:46:54.098: INFO: Got endpoints: latency-svc-gk9r5 [746.476915ms]
Mar 17 10:46:54.100: INFO: Created: latency-svc-4vfpr
Mar 17 10:46:54.106: INFO: Got endpoints: latency-svc-4vfpr [724.055628ms]
Mar 17 10:46:54.124: INFO: Created: latency-svc-msmpd
Mar 17 10:46:54.148: INFO: Got endpoints: latency-svc-msmpd [729.533201ms]
Mar 17 10:46:54.172: INFO: Created: latency-svc-lkm4g
Mar 17 10:46:54.194: INFO: Got endpoints: latency-svc-lkm4g [709.848784ms]
Mar 17 10:46:54.254: INFO: Created: latency-svc-8x86s
Mar 17 10:46:54.272: INFO: Got endpoints: latency-svc-8x86s [751.692859ms]
Mar 17 10:46:54.310: INFO: Created: latency-svc-gtjwq
Mar 17 10:46:54.333: INFO: Got endpoints: latency-svc-gtjwq [770.282787ms]
Mar 17 10:46:54.416: INFO: Created: latency-svc-s7tsf
Mar 17 10:46:54.425: INFO: Got endpoints: latency-svc-s7tsf [796.689192ms]
Mar 17 10:46:54.446: INFO: Created: latency-svc-x5v9j
Mar 17 10:46:54.462: INFO: Got endpoints: latency-svc-x5v9j [790.173073ms]
Mar 17 10:46:54.482: INFO: Created: latency-svc-v69cq
Mar 17 10:46:54.492: INFO: Got endpoints: latency-svc-v69cq [778.211645ms]
Mar 17 10:46:54.512: INFO: Created: latency-svc-z7pbg
Mar 17 10:46:54.577: INFO: Got endpoints: latency-svc-z7pbg [792.551186ms]
Mar 17 10:46:54.580: INFO: Created: latency-svc-x5sq8
Mar 17 10:46:54.601: INFO: Got endpoints: latency-svc-x5sq8 [778.770054ms]
Mar 17 10:46:54.622: INFO: Created: latency-svc-mtvq8
Mar 17 10:46:54.637: INFO: Got endpoints: latency-svc-mtvq8 [778.016178ms]
Mar 17 10:46:54.660: INFO: Created: latency-svc-wwr8p
Mar 17 10:46:54.673: INFO: Got endpoints: latency-svc-wwr8p [730.412694ms]
Mar 17 10:46:54.727: INFO: Created: latency-svc-2f7v6
Mar 17 10:46:54.730: INFO: Got endpoints: latency-svc-2f7v6 [781.401513ms]
Mar 17 10:46:54.752: INFO: Created: latency-svc-d9jmm
Mar 17 10:46:54.763: INFO: Got endpoints: latency-svc-d9jmm [783.835152ms]
Mar 17 10:46:54.782: INFO: Created: latency-svc-wqljb
Mar 17 10:46:54.794: INFO: Got endpoints: latency-svc-wqljb [695.821065ms]
Mar 17 10:46:54.812: INFO: Created: latency-svc-9fxlg
Mar 17 10:46:54.824: INFO: Got endpoints: latency-svc-9fxlg [718.267812ms]
Mar 17 10:46:54.883: INFO: Created: latency-svc-c8t52
Mar 17 10:46:54.886: INFO: Got endpoints: latency-svc-c8t52 [738.027075ms]
Mar 17 10:46:54.916: INFO: Created: latency-svc-6xxk4
Mar 17 10:46:54.927: INFO: Got endpoints: latency-svc-6xxk4 [732.427394ms]
Mar 17 10:46:54.962: INFO: Created: latency-svc-v45vw
Mar 17 10:46:54.975: INFO: Got endpoints: latency-svc-v45vw [702.706326ms]
Mar 17 10:46:55.023: INFO: Created: latency-svc-6h889
Mar 17 10:46:55.048: INFO: Got endpoints: latency-svc-6h889 [714.196691ms]
Mar 17 10:46:55.076: INFO: Created: latency-svc-68rgb
Mar 17 10:46:55.090: INFO: Got endpoints: latency-svc-68rgb [664.655738ms]
Mar 17 10:46:55.158: INFO: Created: latency-svc-k6tf9
Mar 17 10:46:55.161: INFO: Got endpoints: latency-svc-k6tf9 [698.915146ms]
Mar 17 10:46:55.186: INFO: Created: latency-svc-6psmt
Mar 17 10:46:55.200: INFO: Got endpoints: latency-svc-6psmt [707.347653ms]
Mar 17 10:46:55.216: INFO: Created: latency-svc-jwl9j
Mar 17 10:46:55.229: INFO: Got endpoints: latency-svc-jwl9j [651.431171ms]
Mar 17 10:46:55.250: INFO: Created: latency-svc-zwblt
Mar 17 10:46:55.308: INFO: Got endpoints: latency-svc-zwblt [706.934683ms]
Mar 17 10:46:55.310: INFO: Created: latency-svc-5dtgp
Mar 17 10:46:55.318: INFO: Got endpoints: latency-svc-5dtgp [681.686242ms]
Mar 17 10:46:55.353: INFO: Created: latency-svc-d4pcw
Mar 17 10:46:55.367: INFO: Got endpoints: latency-svc-d4pcw [694.429122ms]
Mar 17 10:46:55.395: INFO: Created: latency-svc-xfkrh
Mar 17 10:46:55.470: INFO: Got endpoints: latency-svc-xfkrh [739.600131ms]
Mar 17 10:46:55.472: INFO: Created: latency-svc-zjs7l
Mar 17 10:46:55.475: INFO: Got endpoints: latency-svc-zjs7l [711.93575ms]
Mar 17 10:46:55.496: INFO: Created: latency-svc-5f685
Mar 17 10:46:55.518: INFO: Got endpoints: latency-svc-5f685 [724.338527ms]
Mar 17 10:46:55.544: INFO: Created: latency-svc-ptvrv
Mar 17 10:46:55.554: INFO: Got endpoints: latency-svc-ptvrv [729.806846ms]
Mar 17 10:46:55.602: INFO: Created: latency-svc-jbqxz
Mar 17 10:46:55.604: INFO: Got endpoints: latency-svc-jbqxz [718.485931ms]
Mar 17 10:46:55.629: INFO: Created: latency-svc-c56s5
Mar 17 10:46:55.645: INFO: Got endpoints: latency-svc-c56s5 [718.101376ms]
Mar 17 10:46:55.665: INFO: Created: latency-svc-jqf5z
Mar 17 10:46:55.681: INFO: Got endpoints: latency-svc-jqf5z [706.390144ms]
Mar 17 10:46:55.775: INFO: Created: latency-svc-2j64p
Mar 17 10:46:55.777: INFO: Created: latency-svc-vdnf6
Mar 17 10:46:55.783: INFO: Got endpoints: latency-svc-vdnf6 [693.688884ms]
Mar 17 10:46:55.783: INFO: Got endpoints: latency-svc-2j64p [735.803202ms]
Mar 17 10:46:55.802: INFO: Created: latency-svc-b4jmp
Mar 17 10:46:55.814: INFO: Got endpoints: latency-svc-b4jmp [653.468868ms]
Mar 17 10:46:55.838: INFO: Created: latency-svc-mkf9t
Mar 17 10:46:55.850: INFO: Got endpoints: latency-svc-mkf9t [650.861308ms]
Mar 17 10:46:55.869: INFO: Created: latency-svc-97mt9
Mar 17 10:46:55.924: INFO: Got endpoints: latency-svc-97mt9 [695.244128ms]
Mar 17 10:46:55.927: INFO: Created: latency-svc-jq7k7
Mar 17 10:46:55.941: INFO: Got endpoints: latency-svc-jq7k7 [633.206747ms]
Mar 17 10:46:55.976: INFO: Created: latency-svc-vw4mf
Mar 17 10:46:55.989: INFO: Got endpoints: latency-svc-vw4mf [670.687316ms]
Mar 17 10:46:56.086: INFO: Created: latency-svc-7gdb8
Mar 17 10:46:56.107: INFO: Got endpoints: latency-svc-7gdb8 [740.050905ms]
Mar 17 10:46:56.108: INFO: Created: latency-svc-lvpnd
Mar 17 10:46:56.122: INFO: Got endpoints: latency-svc-lvpnd [651.974954ms]
Mar 17 10:46:56.145: INFO: Created: latency-svc-m55bd
Mar 17 10:46:56.158: INFO: Got endpoints: latency-svc-m55bd [682.494913ms]
Mar 17 10:46:56.230: INFO: Created: latency-svc-mdd72
Mar 17 10:46:56.245: INFO: Got endpoints: latency-svc-mdd72 [726.709966ms]
Mar 17 10:46:56.276: INFO: Created: latency-svc-wmvzl
Mar 17 10:46:56.284: INFO: Got endpoints: latency-svc-wmvzl [730.221352ms]
Mar 17 10:46:56.312: INFO: Created: latency-svc-gcvv7
Mar 17 10:46:56.327: INFO: Got endpoints: latency-svc-gcvv7 [722.341958ms]
Mar 17 10:46:56.392: INFO: Created: latency-svc-mxtkr
Mar 17 10:46:56.394: INFO: Got endpoints: latency-svc-mxtkr [749.18764ms]
Mar 17 10:46:56.415: INFO: Created: latency-svc-g897f
Mar 17 10:46:56.429: INFO: Got endpoints: latency-svc-g897f [747.667011ms]
Mar 17 10:46:56.451: INFO: Created: latency-svc-5t6qf
Mar 17 10:46:56.466: INFO: Got endpoints: latency-svc-5t6qf [682.21303ms]
Mar 17 10:46:56.487: INFO: Created: latency-svc-gm5tm
Mar 17 10:46:56.529: INFO: Got endpoints: latency-svc-gm5tm [745.553703ms]
Mar 17 10:46:56.551: INFO: Created: latency-svc-bw7fb
Mar 17 10:46:56.568: INFO: Got endpoints: latency-svc-bw7fb [754.319882ms]
Mar 17 10:46:56.587: INFO: Created: latency-svc-65jsl
Mar 17 10:46:56.598: INFO: Got endpoints: latency-svc-65jsl [747.925104ms]
Mar 17 10:46:56.625: INFO: Created: latency-svc-jxzr8
Mar 17 10:46:56.697: INFO: Got endpoints: latency-svc-jxzr8 [772.922364ms]
Mar 17 10:46:56.699: INFO: Created: latency-svc-qv8v2
Mar 17 10:46:56.707: INFO: Got endpoints: latency-svc-qv8v2 [765.708744ms]
Mar 17 10:46:56.728: INFO: Created: latency-svc-xljwm
Mar 17 10:46:56.744: INFO: Got endpoints: latency-svc-xljwm [754.811592ms]
Mar 17 10:46:56.763: INFO: Created: latency-svc-cjcw4
Mar 17 10:46:56.779: INFO: Got endpoints: latency-svc-cjcw4 [672.008687ms]
Mar 17 10:46:56.829: INFO: Created: latency-svc-nfhf8
Mar 17 10:46:56.832: INFO: Got endpoints: latency-svc-nfhf8 [710.177352ms]
Mar 17 10:46:56.857: INFO: Created: latency-svc-2ksg9
Mar 17 10:46:56.870: INFO: Got endpoints: latency-svc-2ksg9 [712.021419ms]
Mar 17 10:46:56.888: INFO: Created: latency-svc-6xm8j
Mar 17 10:46:56.900: INFO: Got endpoints: latency-svc-6xm8j [655.140675ms]
Mar 17 10:46:56.925: INFO: Created: latency-svc-vs2jb
Mar 17 10:46:56.991: INFO: Got endpoints: latency-svc-vs2jb [706.331623ms]
Mar 17 10:46:57.026: INFO: Created: latency-svc-tbjfn
Mar 17 10:46:57.049: INFO: Got endpoints: latency-svc-tbjfn [722.493252ms]
Mar 17 10:46:57.073: INFO: Created: latency-svc-847gs
Mar 17 10:46:57.087: INFO: Got endpoints: latency-svc-847gs [692.673789ms]
Mar 17 10:46:57.128: INFO: Created: latency-svc-46znp
Mar 17 10:46:57.131: INFO: Got endpoints: latency-svc-46znp [701.956376ms]
Mar 17 10:46:57.158: INFO: Created: latency-svc-bxsnp
Mar 17 10:46:57.172: INFO: Got endpoints: latency-svc-bxsnp [706.005926ms]
Mar 17 10:46:57.188: INFO: Created: latency-svc-r5s4s
Mar 17 10:46:57.202: INFO: Got endpoints: latency-svc-r5s4s [672.747304ms]
Mar 17 10:46:57.219: INFO: Created: latency-svc-8mhx7
Mar 17 10:46:57.259: INFO: Got endpoints: latency-svc-8mhx7 [690.902667ms]
Mar 17 10:46:57.271: INFO: Created: latency-svc-m2c4l
Mar 17 10:46:57.287: INFO: Got endpoints: latency-svc-m2c4l [688.315695ms]
Mar 17 10:46:57.307: INFO: Created: latency-svc-bhwn9
Mar 17 10:46:57.317: INFO: Got endpoints: latency-svc-bhwn9 [619.885511ms]
Mar 17 10:46:57.349: INFO: Created: latency-svc-cvxtf
Mar 17 10:46:57.359: INFO: Got endpoints: latency-svc-cvxtf [652.289924ms]
Mar 17 10:46:57.410: INFO: Created: latency-svc-89cqm
Mar 17 10:46:57.440: INFO: Got endpoints: latency-svc-89cqm [696.474903ms]
Mar 17 10:46:57.441: INFO: Created: latency-svc-nnp5l
Mar 17 10:46:57.456: INFO: Got endpoints: latency-svc-nnp5l [676.222157ms]
Mar 17 10:46:57.477: INFO: Created: latency-svc-bhbvq
Mar 17 10:46:57.553: INFO: Got endpoints: latency-svc-bhbvq [721.159212ms]
Mar 17 10:46:57.565: INFO: Created: latency-svc-7hvq2
Mar 17 10:46:57.576: INFO: Got endpoints: latency-svc-7hvq2 [706.008695ms]
Mar 17 10:46:57.595: INFO: Created: latency-svc-jkr29
Mar 17 10:46:57.607: INFO: Got endpoints: latency-svc-jkr29 [706.114245ms]
Mar 17 10:46:57.632: INFO: Created: latency-svc-8zzwp
Mar 17 10:46:57.649: INFO: Got endpoints: latency-svc-8zzwp [658.586874ms]
Mar 17 10:46:57.704: INFO: Created: latency-svc-l8lp8
Mar 17 10:46:57.708: INFO: Got endpoints: latency-svc-l8lp8 [659.02567ms]
Mar 17 10:46:57.728: INFO: Created: latency-svc-59rd2
Mar 17 10:46:57.739: INFO: Got endpoints: latency-svc-59rd2 [652.424451ms]
Mar 17 10:46:57.775: INFO: Created: latency-svc-5h6gs
Mar 17 10:46:57.788: INFO: Got endpoints: latency-svc-5h6gs [657.128814ms]
Mar 17 10:46:57.841: INFO: Created: latency-svc-c99gw
Mar 17 10:46:57.845: INFO: Got endpoints: latency-svc-c99gw [672.96729ms]
Mar 17 10:46:57.884: INFO: Created: latency-svc-kltdr
Mar 17 10:46:57.896: INFO: Got endpoints: latency-svc-kltdr [694.277154ms]
Mar 17 10:46:57.914: INFO: Created: latency-svc-mncvg
Mar 17 10:46:57.938: INFO: Got endpoints: latency-svc-mncvg [678.707903ms]
Mar 17 10:46:57.986: INFO: Created: latency-svc-xvrgp
Mar 17 10:46:58.009: INFO: Got endpoints: latency-svc-xvrgp [721.678679ms]
Mar 17 10:46:58.063: INFO: Created: latency-svc-rvj6v
Mar 17 10:46:58.077: INFO: Got endpoints: latency-svc-rvj6v [760.177754ms]
Mar 17 10:46:58.123: INFO: Created: latency-svc-78z97
Mar 17 10:46:58.143: INFO: Got endpoints: latency-svc-78z97 [784.195378ms]
Mar 17 10:46:58.160: INFO: Created: latency-svc-mmfrp
Mar 17 10:46:58.190: INFO: Created: latency-svc-l8gnb
Mar 17 10:46:58.244: INFO: Got endpoints: latency-svc-mmfrp [803.256576ms]
Mar 17 10:46:58.246: INFO: Got endpoints: latency-svc-l8gnb [790.222838ms]
Mar 17 10:46:58.262: INFO: Created: latency-svc-mllpg
Mar 17 10:46:58.276: INFO: Got endpoints: latency-svc-mllpg [723.153566ms]
Mar 17 10:46:58.296: INFO: Created: latency-svc-9cpcx
Mar 17 10:46:58.307: INFO: Got endpoints: latency-svc-9cpcx [730.609884ms]
Mar 17 10:46:58.327: INFO: Created: latency-svc-nkb76
Mar 17 10:46:58.337: INFO: Got endpoints: latency-svc-nkb76 [730.10629ms]
Mar 17 10:46:58.337: INFO: Latencies: [58.987253ms 104.244846ms 131.514742ms 167.787914ms 260.229545ms 288.137237ms 330.51817ms 403.503663ms 415.444571ms 451.709808ms 481.776617ms 528.785768ms 560.61211ms 598.740696ms 619.885511ms 633.206747ms 633.443289ms 638.724771ms 643.711761ms 648.356876ms 650.861308ms 651.431171ms 651.974954ms 652.289924ms 652.424451ms 653.468868ms 655.140675ms 657.128814ms 657.593267ms 657.964806ms 658.586874ms 659.02567ms 659.10136ms 659.198321ms 660.764143ms 661.27952ms 663.353893ms 664.306394ms 664.655738ms 670.058124ms 670.367991ms 670.687316ms 671.303506ms 672.008687ms 672.64314ms 672.747304ms 672.96729ms 673.404747ms 675.367343ms 675.843406ms 676.222157ms 678.707903ms 680.371374ms 681.686242ms 681.689882ms 682.21303ms 682.494913ms 683.220736ms 686.721498ms 688.315695ms 688.391251ms 690.023205ms 690.902667ms 691.24307ms 692.673789ms 693.688884ms 694.277154ms 694.429122ms 695.244128ms 695.821065ms 696.474903ms 697.10455ms 697.97267ms 698.915146ms 699.379272ms 699.920322ms 699.991737ms 700.063302ms 700.374494ms 701.956376ms 702.706326ms 703.208066ms 705.831123ms 705.882963ms 706.005926ms 706.008695ms 706.114245ms 706.331623ms 706.390144ms 706.934683ms 707.347653ms 707.389678ms 709.848784ms 710.177352ms 711.93575ms 712.021419ms 712.13544ms 714.196691ms 716.427023ms 716.618025ms 718.101376ms 718.267812ms 718.485931ms 721.159212ms 721.678679ms 722.341958ms 722.493252ms 723.153566ms 723.174261ms 724.055628ms 724.338527ms 726.709966ms 727.203896ms 729.533201ms 729.806846ms 730.10629ms 730.221352ms 730.412694ms 730.609884ms 732.427394ms 733.817386ms 735.803202ms 738.027075ms 739.600131ms 740.050905ms 745.553703ms 746.476915ms 747.667011ms 747.925104ms 748.368598ms 749.18764ms 751.692859ms 753.821523ms 754.319882ms 754.811592ms 760.177754ms 760.605583ms 762.005408ms 764.450991ms 765.708744ms 766.255244ms 770.282787ms 772.005708ms 772.922364ms 774.306353ms 777.916427ms 778.016178ms 778.211645ms 778.282715ms 778.770054ms 780.018957ms 781.401513ms 783.835152ms 784.195378ms 789.848641ms 790.173073ms 790.222838ms 790.367413ms 792.551186ms 793.805322ms 796.689192ms 803.256576ms 807.402289ms 814.67366ms 828.532136ms 831.590931ms 832.834274ms 835.233449ms 837.762498ms 843.697398ms 854.365665ms 879.354705ms 884.182915ms 907.267337ms 910.386638ms 927.5575ms 932.761702ms 937.638777ms 939.460708ms 957.897052ms 960.469179ms 966.448623ms 976.026771ms 981.82235ms 999.90683ms 1.008570109s 1.01190548s 1.130799461s 1.154786312s 1.155223278s 1.156510876s 1.172699744s 1.173276932s 1.175820865s 1.178844899s 1.184525726s 1.193751568s 1.193805612s 1.204277222s 1.254415748s]
Mar 17 10:46:58.337: INFO: 50 %ile: 718.101376ms
Mar 17 10:46:58.337: INFO: 90 %ile: 960.469179ms
Mar 17 10:46:58.337: INFO: 99 %ile: 1.204277222s
Mar 17 10:46:58.337: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 10:46:58.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-j5l46" for this suite.
Mar 17 10:47:22.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 10:47:22.465: INFO: namespace: e2e-tests-svc-latency-j5l46, resource: bindings, ignored listing per whitelist
Mar 17 10:47:22.528: INFO: namespace e2e-tests-svc-latency-j5l46 deletion completed in 24.153953802s

• [SLOW TEST:38.749 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 10:47:22.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 17 10:47:27.203: INFO: Successfully updated pod "labelsupdateaeb3ee65-683c-11ea-b08f-0242ac11000f"
[AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:47:29.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-52lvb" for this suite. Mar 17 10:47:51.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:47:51.287: INFO: namespace: e2e-tests-projected-52lvb, resource: bindings, ignored listing per whitelist Mar 17 10:47:51.327: INFO: namespace e2e-tests-projected-52lvb deletion completed in 22.098170013s • [SLOW TEST:28.798 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:47:51.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Mar 17 10:47:51.427: INFO: apiVersion: v1 kind: Service 
metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Mar 17 10:47:51.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:47:54.049: INFO: stderr: "" Mar 17 10:47:54.049: INFO: stdout: "service/redis-slave created\n" Mar 17 10:47:54.049: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Mar 17 10:47:54.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:47:54.359: INFO: stderr: "" Mar 17 10:47:54.359: INFO: stdout: "service/redis-master created\n" Mar 17 10:47:54.360: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 17 10:47:54.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:47:54.609: INFO: stderr: "" Mar 17 10:47:54.609: INFO: stdout: "service/frontend created\n" Mar 17 10:47:54.610: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Mar 17 10:47:54.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:47:54.860: INFO: stderr: "" Mar 17 10:47:54.860: INFO: stdout: "deployment.extensions/frontend created\n" Mar 17 10:47:54.861: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 17 10:47:54.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:47:55.193: INFO: stderr: "" Mar 17 10:47:55.193: INFO: stdout: "deployment.extensions/redis-master created\n" Mar 17 10:47:55.194: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: 
gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Mar 17 10:47:55.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:47:55.485: INFO: stderr: "" Mar 17 10:47:55.485: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Mar 17 10:47:55.485: INFO: Waiting for all frontend pods to be Running. Mar 17 10:48:05.536: INFO: Waiting for frontend to serve content. Mar 17 10:48:05.554: INFO: Trying to add a new entry to the guestbook. Mar 17 10:48:05.569: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 17 10:48:05.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:48:05.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 17 10:48:05.756: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 17 10:48:05.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:48:05.918: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 17 10:48:05.918: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 17 10:48:05.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:48:06.064: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 17 10:48:06.064: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 17 10:48:06.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:48:06.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 17 10:48:06.186: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 17 10:48:06.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:48:06.299: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 17 10:48:06.299: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 17 10:48:06.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qbwhn' Mar 17 10:48:06.587: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 17 10:48:06.587: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:48:06.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qbwhn" for this suite. Mar 17 10:48:44.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:48:44.787: INFO: namespace: e2e-tests-kubectl-qbwhn, resource: bindings, ignored listing per whitelist Mar 17 10:48:44.884: INFO: namespace e2e-tests-kubectl-qbwhn deletion completed in 38.288694812s • [SLOW TEST:53.557 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:48:44.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 17 10:48:45.005: INFO: Waiting up to 5m0s for pod "pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-5ln9q" to be "success or failure" Mar 17 10:48:45.009: INFO: Pod "pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935679ms Mar 17 10:48:47.012: INFO: Pod "pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0075054s Mar 17 10:48:49.016: INFO: Pod "pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011623357s STEP: Saw pod success Mar 17 10:48:49.016: INFO: Pod "pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 10:48:49.019: INFO: Trying to get logs from node hunter-worker pod pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 10:48:49.040: INFO: Waiting for pod pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f to disappear Mar 17 10:48:49.044: INFO: Pod pod-dfc6e1f2-683c-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:48:49.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5ln9q" for this suite. 
Mar 17 10:48:55.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:48:55.098: INFO: namespace: e2e-tests-emptydir-5ln9q, resource: bindings, ignored listing per whitelist Mar 17 10:48:55.139: INFO: namespace e2e-tests-emptydir-5ln9q deletion completed in 6.091781256s • [SLOW TEST:10.255 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:48:55.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e5de3ae6-683c-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 10:48:55.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-9crfl" to be "success or failure" Mar 17 10:48:55.250: INFO: Pod "pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.644894ms Mar 17 10:48:57.254: INFO: Pod "pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007396396s Mar 17 10:48:59.258: INFO: Pod "pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01177603s STEP: Saw pod success Mar 17 10:48:59.258: INFO: Pod "pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 10:48:59.261: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 17 10:48:59.281: INFO: Waiting for pod pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f to disappear Mar 17 10:48:59.306: INFO: Pod pod-projected-configmaps-e5e13288-683c-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:48:59.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9crfl" for this suite. 
Mar 17 10:49:05.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:49:05.369: INFO: namespace: e2e-tests-projected-9crfl, resource: bindings, ignored listing per whitelist Mar 17 10:49:05.407: INFO: namespace e2e-tests-projected-9crfl deletion completed in 6.097464391s • [SLOW TEST:10.268 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:49:05.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-m7pq STEP: Creating a pod to test atomic-volume-subpath Mar 17 10:49:05.555: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m7pq" in namespace "e2e-tests-subpath-4954x" to be "success or failure" Mar 17 10:49:05.562: INFO: Pod 
"pod-subpath-test-configmap-m7pq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294399ms Mar 17 10:49:07.565: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009715768s Mar 17 10:49:09.638: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082697431s Mar 17 10:49:11.641: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 6.085587514s Mar 17 10:49:13.645: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 8.089346468s Mar 17 10:49:15.649: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 10.093121754s Mar 17 10:49:17.652: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 12.096595941s Mar 17 10:49:19.656: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 14.100538676s Mar 17 10:49:21.670: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 16.114421207s Mar 17 10:49:23.674: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 18.11842494s Mar 17 10:49:25.678: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 20.122646092s Mar 17 10:49:27.682: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 22.12661378s Mar 17 10:49:29.705: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Running", Reason="", readiness=false. Elapsed: 24.149709936s Mar 17 10:49:31.712: INFO: Pod "pod-subpath-test-configmap-m7pq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.156844768s STEP: Saw pod success Mar 17 10:49:31.712: INFO: Pod "pod-subpath-test-configmap-m7pq" satisfied condition "success or failure" Mar 17 10:49:31.718: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-m7pq container test-container-subpath-configmap-m7pq: STEP: delete the pod Mar 17 10:49:31.741: INFO: Waiting for pod pod-subpath-test-configmap-m7pq to disappear Mar 17 10:49:31.754: INFO: Pod pod-subpath-test-configmap-m7pq no longer exists STEP: Deleting pod pod-subpath-test-configmap-m7pq Mar 17 10:49:31.754: INFO: Deleting pod "pod-subpath-test-configmap-m7pq" in namespace "e2e-tests-subpath-4954x" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:49:31.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-4954x" for this suite. Mar 17 10:49:37.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:49:37.812: INFO: namespace: e2e-tests-subpath-4954x, resource: bindings, ignored listing per whitelist Mar 17 10:49:37.846: INFO: namespace e2e-tests-subpath-4954x deletion completed in 6.087693441s • [SLOW TEST:32.439 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:49:37.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:49:43.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-sjbgb" for this suite. Mar 17 10:50:05.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:50:05.139: INFO: namespace: e2e-tests-replication-controller-sjbgb, resource: bindings, ignored listing per whitelist Mar 17 10:50:05.142: INFO: namespace e2e-tests-replication-controller-sjbgb deletion completed in 22.114937923s • [SLOW TEST:27.295 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:50:05.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-0f9a4aa7-683d-11ea-b08f-0242ac11000f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0f9a4aa7-683d-11ea-b08f-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:51:13.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fb7mk" for this suite. 
Mar 17 10:51:35.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:51:35.642: INFO: namespace: e2e-tests-configmap-fb7mk, resource: bindings, ignored listing per whitelist Mar 17 10:51:35.664: INFO: namespace e2e-tests-configmap-fb7mk deletion completed in 22.096332132s • [SLOW TEST:90.522 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:51:35.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 10:51:35.796: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 17 10:51:40.800: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 17 10:51:40.801: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 17 10:51:40.836: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-zlqrt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zlqrt/deployments/test-cleanup-deployment,UID:489288cb-683d-11ea-99e8-0242ac110002,ResourceVersion:310523,Generation:1,CreationTimestamp:2020-03-17 10:51:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 17 10:51:40.841: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Mar 17 10:51:40.841: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 17 10:51:40.841: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-zlqrt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zlqrt/replicasets/test-cleanup-controller,UID:45934cec-683d-11ea-99e8-0242ac110002,ResourceVersion:310524,Generation:1,CreationTimestamp:2020-03-17 10:51:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 489288cb-683d-11ea-99e8-0242ac110002 0xc001c66f47 0xc001c66f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 17 10:51:40.848: INFO: Pod "test-cleanup-controller-h7ptn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-h7ptn,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-zlqrt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zlqrt/pods/test-cleanup-controller-h7ptn,UID:45969fcd-683d-11ea-99e8-0242ac110002,ResourceVersion:310518,Generation:0,CreationTimestamp:2020-03-17 10:51:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 45934cec-683d-11ea-99e8-0242ac110002 0xc000e56027 0xc000e56028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n8c5f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n8c5f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n8c5f true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000e560a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000e560c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 10:51:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 10:51:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 10:51:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 10:51:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.53,StartTime:2020-03-17 10:51:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 10:51:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bc8ad41f1b77c1603bfc1321e3333a5e598f5a83d7c81274342db3541b4d68f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:51:40.848: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zlqrt" for this suite. Mar 17 10:51:46.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:51:47.023: INFO: namespace: e2e-tests-deployment-zlqrt, resource: bindings, ignored listing per whitelist Mar 17 10:51:47.062: INFO: namespace e2e-tests-deployment-zlqrt deletion completed in 6.15298858s • [SLOW TEST:11.398 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:51:47.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4c5ea2d7-683d-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 10:51:47.200: INFO: Waiting up to 5m0s for pod "pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-xtfjh" to be "success or failure" Mar 17 10:51:47.206: INFO: Pod "pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.481553ms Mar 17 10:51:49.210: INFO: Pod "pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009946433s Mar 17 10:51:51.213: INFO: Pod "pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013540963s STEP: Saw pod success Mar 17 10:51:51.213: INFO: Pod "pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 10:51:51.216: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 17 10:51:51.247: INFO: Waiting for pod pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f to disappear Mar 17 10:51:51.257: INFO: Pod pod-secrets-4c5f5018-683d-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:51:51.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xtfjh" for this suite. 
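A pattern repeated throughout this run is polling a pod until it reaches a terminal phase, logging the elapsed time at each check (`Phase="Pending" … Elapsed: 6.481553ms`, then `Phase="Succeeded" … Elapsed: 4.013540963s`). As a minimal sketch for post-processing a captured log — not part of the e2e framework itself, and the helper name is illustrative — those elapsed durations can be extracted like this:

```python
import re

# Matches the phase and trailing duration in pod-wait lines such as:
#   Pod "...": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009946433s
ELAPSED_RE = re.compile(r'Phase="(\w+)".*?Elapsed:\s*([\d.]+)(ms|s)')

def parse_elapsed(line: str):
    """Return (phase, seconds) parsed from a pod-wait log line, or None."""
    m = ELAPSED_RE.search(line)
    if m is None:
        return None
    phase, value, unit = m.group(1), float(m.group(2)), m.group(3)
    # Normalize millisecond readings to seconds for comparison.
    return phase, value / 1000.0 if unit == "ms" else value
```

Feeding it the succession of wait lines for one pod gives the phase-transition timeline, which is useful when triaging why a conformance pod sat in `Pending`.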
Mar 17 10:51:57.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:51:57.336: INFO: namespace: e2e-tests-secrets-xtfjh, resource: bindings, ignored listing per whitelist Mar 17 10:51:57.357: INFO: namespace e2e-tests-secrets-xtfjh deletion completed in 6.080773568s • [SLOW TEST:10.295 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:51:57.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 17 10:51:57.457: INFO: Waiting up to 5m0s for pod "downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-8qntc" to be "success or failure" Mar 17 10:51:57.498: INFO: Pod "downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.353431ms Mar 17 10:51:59.516: INFO: Pod "downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.058191077s Mar 17 10:52:01.520: INFO: Pod "downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062488654s STEP: Saw pod success Mar 17 10:52:01.520: INFO: Pod "downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 10:52:01.523: INFO: Trying to get logs from node hunter-worker2 pod downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f container dapi-container: STEP: delete the pod Mar 17 10:52:01.547: INFO: Waiting for pod downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f to disappear Mar 17 10:52:01.557: INFO: Pod downward-api-527e3d5b-683d-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:52:01.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8qntc" for this suite. Mar 17 10:52:07.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:52:07.676: INFO: namespace: e2e-tests-downward-api-8qntc, resource: bindings, ignored listing per whitelist Mar 17 10:52:07.682: INFO: namespace e2e-tests-downward-api-8qntc deletion completed in 6.118462971s • [SLOW TEST:10.324 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:52:07.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 17 10:52:07.796: INFO: Waiting up to 5m0s for pod "pod-58a6a354-683d-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-5l62g" to be "success or failure" Mar 17 10:52:07.800: INFO: Pod "pod-58a6a354-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15928ms Mar 17 10:52:09.804: INFO: Pod "pod-58a6a354-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008664835s Mar 17 10:52:11.809: INFO: Pod "pod-58a6a354-683d-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013195097s STEP: Saw pod success Mar 17 10:52:11.809: INFO: Pod "pod-58a6a354-683d-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 10:52:11.812: INFO: Trying to get logs from node hunter-worker pod pod-58a6a354-683d-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 10:52:11.831: INFO: Waiting for pod pod-58a6a354-683d-11ea-b08f-0242ac11000f to disappear Mar 17 10:52:11.881: INFO: Pod pod-58a6a354-683d-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:52:11.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5l62g" for this suite. 
Mar 17 10:52:17.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:52:17.981: INFO: namespace: e2e-tests-emptydir-5l62g, resource: bindings, ignored listing per whitelist Mar 17 10:52:17.989: INFO: namespace e2e-tests-emptydir-5l62g deletion completed in 6.103421782s • [SLOW TEST:10.307 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:52:17.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 10:52:18.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-zrf27" to be "success or failure" Mar 17 10:52:18.136: INFO: Pod "downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f": 
Phase="Pending", Reason="", readiness=false. Elapsed: 20.252793ms Mar 17 10:52:20.140: INFO: Pod "downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024247846s Mar 17 10:52:22.144: INFO: Pod "downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028363526s STEP: Saw pod success Mar 17 10:52:22.144: INFO: Pod "downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 10:52:22.147: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 10:52:22.185: INFO: Waiting for pod downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f to disappear Mar 17 10:52:22.198: INFO: Pod downwardapi-volume-5ece765a-683d-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:52:22.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zrf27" for this suite. 
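Each spec that exceeds Ginkgo's slow-spec threshold is flagged in this output with a `• [SLOW TEST:N seconds]` marker. A hedged sketch (the function names are illustrative, not part of the e2e tooling) for tallying those markers out of a captured log:

```python
import re

SLOW_RE = re.compile(r"\[SLOW TEST:([\d.]+) seconds\]")

def slow_test_durations(log_text: str) -> list[float]:
    """Collect every [SLOW TEST:...] duration, in order of appearance."""
    return [float(m) for m in SLOW_RE.findall(log_text)]

def total_slow_seconds(log_text: str) -> float:
    """Sum of all flagged slow-spec durations in the log."""
    return sum(slow_test_durations(log_text))
```

Run over the full log, this gives a quick ranking of which conformance specs dominated the wall-clock time of the suite.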
Mar 17 10:52:28.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:52:28.292: INFO: namespace: e2e-tests-projected-zrf27, resource: bindings, ignored listing per whitelist Mar 17 10:52:28.298: INFO: namespace e2e-tests-projected-zrf27 deletion completed in 6.096202205s • [SLOW TEST:10.308 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:52:28.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 17 10:52:28.415: INFO: namespace e2e-tests-kubectl-rjfxf Mar 17 10:52:28.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rjfxf' Mar 17 10:52:28.671: INFO: stderr: "" Mar 17 10:52:28.671: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Mar 17 10:52:29.726: INFO: Selector matched 1 pods for map[app:redis] Mar 17 10:52:29.726: INFO: Found 0 / 1 Mar 17 10:52:30.675: INFO: Selector matched 1 pods for map[app:redis] Mar 17 10:52:30.675: INFO: Found 0 / 1 Mar 17 10:52:31.678: INFO: Selector matched 1 pods for map[app:redis] Mar 17 10:52:31.678: INFO: Found 0 / 1 Mar 17 10:52:32.675: INFO: Selector matched 1 pods for map[app:redis] Mar 17 10:52:32.675: INFO: Found 1 / 1 Mar 17 10:52:32.675: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 17 10:52:32.679: INFO: Selector matched 1 pods for map[app:redis] Mar 17 10:52:32.679: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 17 10:52:32.679: INFO: wait on redis-master startup in e2e-tests-kubectl-rjfxf Mar 17 10:52:32.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j5phw redis-master --namespace=e2e-tests-kubectl-rjfxf' Mar 17 10:52:32.794: INFO: stderr: "" Mar 17 10:52:32.794: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Mar 10:52:30.993 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Mar 10:52:30.993 # Server started, Redis version 3.2.12\n1:M 17 Mar 10:52:30.993 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Mar 10:52:30.993 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 17 10:52:32.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-rjfxf' Mar 17 10:52:32.938: INFO: stderr: "" Mar 17 10:52:32.938: INFO: stdout: "service/rm2 exposed\n" Mar 17 10:52:32.953: INFO: Service rm2 in namespace e2e-tests-kubectl-rjfxf found. STEP: exposing service Mar 17 10:52:34.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-rjfxf' Mar 17 10:52:35.133: INFO: stderr: "" Mar 17 10:52:35.133: INFO: stdout: "service/rm3 exposed\n" Mar 17 10:52:35.136: INFO: Service rm3 in namespace e2e-tests-kubectl-rjfxf found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:52:37.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rjfxf" for this suite. 
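Every spec in this run ends with a teardown line of the form `namespace <name> deletion completed in <duration>s`, and teardown time varies widely here (about 6s for most namespaces, 22s for the kubectl one above). A small sketch, again assuming only the log format shown and using an illustrative helper name, that maps each namespace to its teardown duration:

```python
import re

DELETE_RE = re.compile(r"namespace (\S+) deletion completed in ([\d.]+)s")

def deletion_times(log_text: str) -> dict[str, float]:
    """Map namespace name -> teardown duration in seconds."""
    return {ns: float(sec) for ns, sec in DELETE_RE.findall(log_text)}
```

Unusually long teardowns (like the 22s one) often indicate pods with finalizers or long termination grace periods lingering in the namespace.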
Mar 17 10:52:59.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:52:59.224: INFO: namespace: e2e-tests-kubectl-rjfxf, resource: bindings, ignored listing per whitelist Mar 17 10:52:59.281: INFO: namespace e2e-tests-kubectl-rjfxf deletion completed in 22.131886531s • [SLOW TEST:30.984 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:52:59.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pg749 [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 17 10:52:59.423: INFO: Found 0 stateful pods, waiting for 3 Mar 17 10:53:09.429: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:53:09.429: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:53:09.429: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:53:09.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pg749 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 10:53:09.694: INFO: stderr: "I0317 10:53:09.566530 411 log.go:172] (0xc000150840) (0xc000768640) Create stream\nI0317 10:53:09.566592 411 log.go:172] (0xc000150840) (0xc000768640) Stream added, broadcasting: 1\nI0317 10:53:09.569081 411 log.go:172] (0xc000150840) Reply frame received for 1\nI0317 10:53:09.569247 411 log.go:172] (0xc000150840) (0xc0007686e0) Create stream\nI0317 10:53:09.569267 411 log.go:172] (0xc000150840) (0xc0007686e0) Stream added, broadcasting: 3\nI0317 10:53:09.570261 411 log.go:172] (0xc000150840) Reply frame received for 3\nI0317 10:53:09.570304 411 log.go:172] (0xc000150840) (0xc0006b6be0) Create stream\nI0317 10:53:09.570317 411 log.go:172] (0xc000150840) (0xc0006b6be0) Stream added, broadcasting: 5\nI0317 10:53:09.571224 411 log.go:172] (0xc000150840) Reply frame received for 5\nI0317 10:53:09.687217 411 log.go:172] (0xc000150840) Data frame received for 3\nI0317 10:53:09.687246 411 log.go:172] (0xc0007686e0) (3) Data frame handling\nI0317 10:53:09.687267 411 log.go:172] (0xc0007686e0) (3) Data frame sent\nI0317 10:53:09.687279 411 log.go:172] (0xc000150840) Data frame received for 3\nI0317 10:53:09.687289 411 log.go:172] (0xc0007686e0) (3) Data frame handling\nI0317 10:53:09.687537 411 log.go:172] 
(0xc000150840) Data frame received for 5\nI0317 10:53:09.687562 411 log.go:172] (0xc0006b6be0) (5) Data frame handling\nI0317 10:53:09.689972 411 log.go:172] (0xc000150840) Data frame received for 1\nI0317 10:53:09.689993 411 log.go:172] (0xc000768640) (1) Data frame handling\nI0317 10:53:09.690011 411 log.go:172] (0xc000768640) (1) Data frame sent\nI0317 10:53:09.690034 411 log.go:172] (0xc000150840) (0xc000768640) Stream removed, broadcasting: 1\nI0317 10:53:09.690128 411 log.go:172] (0xc000150840) Go away received\nI0317 10:53:09.690215 411 log.go:172] (0xc000150840) (0xc000768640) Stream removed, broadcasting: 1\nI0317 10:53:09.690235 411 log.go:172] (0xc000150840) (0xc0007686e0) Stream removed, broadcasting: 3\nI0317 10:53:09.690250 411 log.go:172] (0xc000150840) (0xc0006b6be0) Stream removed, broadcasting: 5\n" Mar 17 10:53:09.694: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 10:53:09.694: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 17 10:53:19.731: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 17 10:53:29.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pg749 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 10:53:29.988: INFO: stderr: "I0317 10:53:29.888899 433 log.go:172] (0xc00078e0b0) (0xc0001c0820) Create stream\nI0317 10:53:29.888959 433 log.go:172] (0xc00078e0b0) (0xc0001c0820) Stream added, broadcasting: 1\nI0317 10:53:29.891492 433 log.go:172] (0xc00078e0b0) Reply frame received for 1\nI0317 10:53:29.891533 433 log.go:172] (0xc00078e0b0) (0xc000314d20) Create stream\nI0317 10:53:29.891544 433 log.go:172] (0xc00078e0b0) (0xc000314d20) 
Stream added, broadcasting: 3\nI0317 10:53:29.892635 433 log.go:172] (0xc00078e0b0) Reply frame received for 3\nI0317 10:53:29.892696 433 log.go:172] (0xc00078e0b0) (0xc0001c08c0) Create stream\nI0317 10:53:29.892714 433 log.go:172] (0xc00078e0b0) (0xc0001c08c0) Stream added, broadcasting: 5\nI0317 10:53:29.893810 433 log.go:172] (0xc00078e0b0) Reply frame received for 5\nI0317 10:53:29.982062 433 log.go:172] (0xc00078e0b0) Data frame received for 3\nI0317 10:53:29.982121 433 log.go:172] (0xc000314d20) (3) Data frame handling\nI0317 10:53:29.982140 433 log.go:172] (0xc000314d20) (3) Data frame sent\nI0317 10:53:29.982152 433 log.go:172] (0xc00078e0b0) Data frame received for 3\nI0317 10:53:29.982163 433 log.go:172] (0xc000314d20) (3) Data frame handling\nI0317 10:53:29.982203 433 log.go:172] (0xc00078e0b0) Data frame received for 5\nI0317 10:53:29.982252 433 log.go:172] (0xc0001c08c0) (5) Data frame handling\nI0317 10:53:29.983764 433 log.go:172] (0xc00078e0b0) Data frame received for 1\nI0317 10:53:29.983785 433 log.go:172] (0xc0001c0820) (1) Data frame handling\nI0317 10:53:29.983798 433 log.go:172] (0xc0001c0820) (1) Data frame sent\nI0317 10:53:29.983816 433 log.go:172] (0xc00078e0b0) (0xc0001c0820) Stream removed, broadcasting: 1\nI0317 10:53:29.983889 433 log.go:172] (0xc00078e0b0) Go away received\nI0317 10:53:29.983987 433 log.go:172] (0xc00078e0b0) (0xc0001c0820) Stream removed, broadcasting: 1\nI0317 10:53:29.984007 433 log.go:172] (0xc00078e0b0) (0xc000314d20) Stream removed, broadcasting: 3\nI0317 10:53:29.984019 433 log.go:172] (0xc00078e0b0) (0xc0001c08c0) Stream removed, broadcasting: 5\n" Mar 17 10:53:29.988: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 10:53:29.988: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 10:53:40.022: INFO: Waiting for StatefulSet e2e-tests-statefulset-pg749/ss2 to complete update Mar 17 
10:53:40.022: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 17 10:53:40.022: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 17 10:53:40.022: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 17 10:53:50.043: INFO: Waiting for StatefulSet e2e-tests-statefulset-pg749/ss2 to complete update Mar 17 10:53:50.043: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 17 10:53:50.043: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 17 10:54:00.030: INFO: Waiting for StatefulSet e2e-tests-statefulset-pg749/ss2 to complete update Mar 17 10:54:00.030: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Mar 17 10:54:10.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pg749 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 10:54:10.272: INFO: stderr: "I0317 10:54:10.157971 454 log.go:172] (0xc00014c6e0) (0xc000712640) Create stream\nI0317 10:54:10.158030 454 log.go:172] (0xc00014c6e0) (0xc000712640) Stream added, broadcasting: 1\nI0317 10:54:10.159891 454 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0317 10:54:10.159925 454 log.go:172] (0xc00014c6e0) (0xc0007c4c80) Create stream\nI0317 10:54:10.159935 454 log.go:172] (0xc00014c6e0) (0xc0007c4c80) Stream added, broadcasting: 3\nI0317 10:54:10.160670 454 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0317 10:54:10.160701 454 log.go:172] (0xc00014c6e0) (0xc0006a2000) Create stream\nI0317 10:54:10.160711 454 log.go:172] (0xc00014c6e0) 
(0xc0006a2000) Stream added, broadcasting: 5\nI0317 10:54:10.161646 454 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0317 10:54:10.265822 454 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0317 10:54:10.265858 454 log.go:172] (0xc0007c4c80) (3) Data frame handling\nI0317 10:54:10.265880 454 log.go:172] (0xc0007c4c80) (3) Data frame sent\nI0317 10:54:10.265897 454 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0317 10:54:10.265913 454 log.go:172] (0xc0007c4c80) (3) Data frame handling\nI0317 10:54:10.266192 454 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0317 10:54:10.266240 454 log.go:172] (0xc0006a2000) (5) Data frame handling\nI0317 10:54:10.267824 454 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0317 10:54:10.267839 454 log.go:172] (0xc000712640) (1) Data frame handling\nI0317 10:54:10.267849 454 log.go:172] (0xc000712640) (1) Data frame sent\nI0317 10:54:10.268098 454 log.go:172] (0xc00014c6e0) (0xc000712640) Stream removed, broadcasting: 1\nI0317 10:54:10.268161 454 log.go:172] (0xc00014c6e0) Go away received\nI0317 10:54:10.268412 454 log.go:172] (0xc00014c6e0) (0xc000712640) Stream removed, broadcasting: 1\nI0317 10:54:10.268444 454 log.go:172] (0xc00014c6e0) (0xc0007c4c80) Stream removed, broadcasting: 3\nI0317 10:54:10.268461 454 log.go:172] (0xc00014c6e0) (0xc0006a2000) Stream removed, broadcasting: 5\n" Mar 17 10:54:10.272: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 10:54:10.272: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 10:54:20.305: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 17 10:54:30.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pg749 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 10:54:30.572: INFO: stderr: "I0317 
10:54:30.471779 476 log.go:172] (0xc000138790) (0xc0007e35e0) Create stream\nI0317 10:54:30.471853 476 log.go:172] (0xc000138790) (0xc0007e35e0) Stream added, broadcasting: 1\nI0317 10:54:30.474462 476 log.go:172] (0xc000138790) Reply frame received for 1\nI0317 10:54:30.474510 476 log.go:172] (0xc000138790) (0xc0007e3680) Create stream\nI0317 10:54:30.474522 476 log.go:172] (0xc000138790) (0xc0007e3680) Stream added, broadcasting: 3\nI0317 10:54:30.475812 476 log.go:172] (0xc000138790) Reply frame received for 3\nI0317 10:54:30.475866 476 log.go:172] (0xc000138790) (0xc000584000) Create stream\nI0317 10:54:30.475882 476 log.go:172] (0xc000138790) (0xc000584000) Stream added, broadcasting: 5\nI0317 10:54:30.476890 476 log.go:172] (0xc000138790) Reply frame received for 5\nI0317 10:54:30.566633 476 log.go:172] (0xc000138790) Data frame received for 3\nI0317 10:54:30.566680 476 log.go:172] (0xc0007e3680) (3) Data frame handling\nI0317 10:54:30.566696 476 log.go:172] (0xc0007e3680) (3) Data frame sent\nI0317 10:54:30.566707 476 log.go:172] (0xc000138790) Data frame received for 3\nI0317 10:54:30.566717 476 log.go:172] (0xc0007e3680) (3) Data frame handling\nI0317 10:54:30.566758 476 log.go:172] (0xc000138790) Data frame received for 5\nI0317 10:54:30.566782 476 log.go:172] (0xc000584000) (5) Data frame handling\nI0317 10:54:30.568373 476 log.go:172] (0xc000138790) Data frame received for 1\nI0317 10:54:30.568399 476 log.go:172] (0xc0007e35e0) (1) Data frame handling\nI0317 10:54:30.568420 476 log.go:172] (0xc0007e35e0) (1) Data frame sent\nI0317 10:54:30.568439 476 log.go:172] (0xc000138790) (0xc0007e35e0) Stream removed, broadcasting: 1\nI0317 10:54:30.568500 476 log.go:172] (0xc000138790) Go away received\nI0317 10:54:30.568798 476 log.go:172] (0xc000138790) (0xc0007e35e0) Stream removed, broadcasting: 1\nI0317 10:54:30.568831 476 log.go:172] (0xc000138790) (0xc0007e3680) Stream removed, broadcasting: 3\nI0317 10:54:30.568846 476 log.go:172] (0xc000138790) 
(0xc000584000) Stream removed, broadcasting: 5\n" Mar 17 10:54:30.573: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 10:54:30.573: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 10:54:40.594: INFO: Waiting for StatefulSet e2e-tests-statefulset-pg749/ss2 to complete update Mar 17 10:54:40.594: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 17 10:54:40.594: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 17 10:54:40.594: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 17 10:54:50.602: INFO: Waiting for StatefulSet e2e-tests-statefulset-pg749/ss2 to complete update Mar 17 10:54:50.602: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 17 10:54:50.602: INFO: Waiting for Pod e2e-tests-statefulset-pg749/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 17 10:55:00.602: INFO: Waiting for StatefulSet e2e-tests-statefulset-pg749/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 17 10:55:10.602: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pg749 Mar 17 10:55:10.604: INFO: Scaling statefulset ss2 to 0 Mar 17 10:55:30.623: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 10:55:30.626: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:55:30.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-statefulset-pg749" for this suite. Mar 17 10:55:36.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:55:36.739: INFO: namespace: e2e-tests-statefulset-pg749, resource: bindings, ignored listing per whitelist Mar 17 10:55:36.751: INFO: namespace e2e-tests-statefulset-pg749 deletion completed in 6.106759869s • [SLOW TEST:157.470 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:55:36.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 10:55:36.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f" in namespace 
"e2e-tests-downward-api-jkhx9" to be "success or failure" Mar 17 10:55:36.859: INFO: Pod "downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.797097ms Mar 17 10:55:38.863: INFO: Pod "downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006862011s Mar 17 10:55:40.867: INFO: Pod "downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011031778s STEP: Saw pod success Mar 17 10:55:40.867: INFO: Pod "downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 10:55:40.870: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 10:55:40.891: INFO: Waiting for pod downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f to disappear Mar 17 10:55:40.895: INFO: Pod downwardapi-volume-d5424f79-683d-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:55:40.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jkhx9" for this suite. 
Mar 17 10:55:46.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:55:46.971: INFO: namespace: e2e-tests-downward-api-jkhx9, resource: bindings, ignored listing per whitelist Mar 17 10:55:47.012: INFO: namespace e2e-tests-downward-api-jkhx9 deletion completed in 6.113242022s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:55:47.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 17 10:55:47.124: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 17 10:55:52.128: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:55:53.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-replication-controller-cpnc8" for this suite. Mar 17 10:55:59.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:55:59.304: INFO: namespace: e2e-tests-replication-controller-cpnc8, resource: bindings, ignored listing per whitelist Mar 17 10:55:59.310: INFO: namespace e2e-tests-replication-controller-cpnc8 deletion completed in 6.147016618s • [SLOW TEST:12.299 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:55:59.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:56:03.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-pmgxt" for this suite. 
Mar 17 10:56:43.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:56:43.588: INFO: namespace: e2e-tests-kubelet-test-pmgxt, resource: bindings, ignored listing per whitelist Mar 17 10:56:43.622: INFO: namespace e2e-tests-kubelet-test-pmgxt deletion completed in 40.107558s • [SLOW TEST:44.311 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:56:43.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 17 10:56:44.295: INFO: Pod name wrapped-volume-race-fd6f4e4b-683d-11ea-b08f-0242ac11000f: Found 0 pods out of 5 Mar 17 10:56:49.305: INFO: Pod name wrapped-volume-race-fd6f4e4b-683d-11ea-b08f-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod 
is running STEP: deleting ReplicationController wrapped-volume-race-fd6f4e4b-683d-11ea-b08f-0242ac11000f in namespace e2e-tests-emptydir-wrapper-94vdm, will wait for the garbage collector to delete the pods Mar 17 10:58:31.386: INFO: Deleting ReplicationController wrapped-volume-race-fd6f4e4b-683d-11ea-b08f-0242ac11000f took: 6.845338ms Mar 17 10:58:31.586: INFO: Terminating ReplicationController wrapped-volume-race-fd6f4e4b-683d-11ea-b08f-0242ac11000f pods took: 200.301282ms STEP: Creating RC which spawns configmap-volume pods Mar 17 10:59:12.750: INFO: Pod name wrapped-volume-race-55e9cf52-683e-11ea-b08f-0242ac11000f: Found 0 pods out of 5 Mar 17 10:59:17.758: INFO: Pod name wrapped-volume-race-55e9cf52-683e-11ea-b08f-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-55e9cf52-683e-11ea-b08f-0242ac11000f in namespace e2e-tests-emptydir-wrapper-94vdm, will wait for the garbage collector to delete the pods Mar 17 11:01:31.840: INFO: Deleting ReplicationController wrapped-volume-race-55e9cf52-683e-11ea-b08f-0242ac11000f took: 7.555216ms Mar 17 11:01:31.940: INFO: Terminating ReplicationController wrapped-volume-race-55e9cf52-683e-11ea-b08f-0242ac11000f pods took: 100.217216ms STEP: Creating RC which spawns configmap-volume pods Mar 17 11:02:12.368: INFO: Pod name wrapped-volume-race-c0fece39-683e-11ea-b08f-0242ac11000f: Found 0 pods out of 5 Mar 17 11:02:17.376: INFO: Pod name wrapped-volume-race-c0fece39-683e-11ea-b08f-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c0fece39-683e-11ea-b08f-0242ac11000f in namespace e2e-tests-emptydir-wrapper-94vdm, will wait for the garbage collector to delete the pods Mar 17 11:04:09.461: INFO: Deleting ReplicationController wrapped-volume-race-c0fece39-683e-11ea-b08f-0242ac11000f took: 8.629202ms Mar 17 11:04:09.561: INFO: Terminating ReplicationController 
wrapped-volume-race-c0fece39-683e-11ea-b08f-0242ac11000f pods took: 100.232767ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:04:52.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-94vdm" for this suite. Mar 17 11:05:00.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:05:00.962: INFO: namespace: e2e-tests-emptydir-wrapper-94vdm, resource: bindings, ignored listing per whitelist Mar 17 11:05:00.995: INFO: namespace e2e-tests-emptydir-wrapper-94vdm deletion completed in 8.084636139s • [SLOW TEST:497.373 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:05:00.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to 
test emptydir volume type on node default medium Mar 17 11:05:01.114: INFO: Waiting up to 5m0s for pod "pod-25953e43-683f-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-m9zqd" to be "success or failure" Mar 17 11:05:01.118: INFO: Pod "pod-25953e43-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189463ms Mar 17 11:05:03.122: INFO: Pod "pod-25953e43-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008031102s Mar 17 11:05:05.126: INFO: Pod "pod-25953e43-683f-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011846553s STEP: Saw pod success Mar 17 11:05:05.126: INFO: Pod "pod-25953e43-683f-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:05:05.128: INFO: Trying to get logs from node hunter-worker2 pod pod-25953e43-683f-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:05:05.150: INFO: Waiting for pod pod-25953e43-683f-11ea-b08f-0242ac11000f to disappear Mar 17 11:05:05.154: INFO: Pod pod-25953e43-683f-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:05:05.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m9zqd" for this suite. 
Mar 17 11:05:11.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:05:11.287: INFO: namespace: e2e-tests-emptydir-m9zqd, resource: bindings, ignored listing per whitelist Mar 17 11:05:11.291: INFO: namespace e2e-tests-emptydir-m9zqd deletion completed in 6.133311157s • [SLOW TEST:10.296 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:05:11.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2bbe6135-683f-11ea-b08f-0242ac11000f STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2bbe6135-683f-11ea-b08f-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:06:27.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-qpx5c" for this suite. Mar 17 11:06:49.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:06:49.849: INFO: namespace: e2e-tests-projected-qpx5c, resource: bindings, ignored listing per whitelist Mar 17 11:06:49.900: INFO: namespace e2e-tests-projected-qpx5c deletion completed in 22.092900374s • [SLOW TEST:98.609 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:06:49.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 11:06:50.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-cjwsk" to be "success or failure" Mar 17 11:06:50.019: INFO: Pod 
"downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.059506ms Mar 17 11:06:52.037: INFO: Pod "downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029034549s Mar 17 11:06:54.041: INFO: Pod "downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033134291s STEP: Saw pod success Mar 17 11:06:54.042: INFO: Pod "downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:06:54.044: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 11:06:54.063: INFO: Waiting for pod downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f to disappear Mar 17 11:06:54.109: INFO: Pod downwardapi-volume-667e30fa-683f-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:06:54.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cjwsk" for this suite. 
Mar 17 11:07:00.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:07:00.186: INFO: namespace: e2e-tests-projected-cjwsk, resource: bindings, ignored listing per whitelist Mar 17 11:07:00.208: INFO: namespace e2e-tests-projected-cjwsk deletion completed in 6.095068287s • [SLOW TEST:10.307 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:07:00.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6ca54e2b-683f-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:07:00.376: INFO: Waiting up to 5m0s for pod "pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-k8wvh" to be "success or failure" Mar 17 11:07:00.392: INFO: Pod "pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.276511ms Mar 17 11:07:02.396: INFO: Pod "pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020650306s Mar 17 11:07:04.401: INFO: Pod "pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025047033s STEP: Saw pod success Mar 17 11:07:04.401: INFO: Pod "pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:07:04.404: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 17 11:07:04.488: INFO: Waiting for pod pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f to disappear Mar 17 11:07:04.492: INFO: Pod pod-configmaps-6cac150a-683f-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:07:04.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k8wvh" for this suite. 
Mar 17 11:07:10.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:07:10.569: INFO: namespace: e2e-tests-configmap-k8wvh, resource: bindings, ignored listing per whitelist Mar 17 11:07:10.590: INFO: namespace e2e-tests-configmap-k8wvh deletion completed in 6.094584178s • [SLOW TEST:10.382 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:07:10.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 17 11:07:10.673: INFO: Waiting up to 5m0s for pod "pod-72cfc7cd-683f-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-cx5q7" to be "success or failure" Mar 17 11:07:10.688: INFO: Pod "pod-72cfc7cd-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.689213ms Mar 17 11:07:12.692: INFO: Pod "pod-72cfc7cd-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018518731s Mar 17 11:07:14.696: INFO: Pod "pod-72cfc7cd-683f-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022497968s STEP: Saw pod success Mar 17 11:07:14.696: INFO: Pod "pod-72cfc7cd-683f-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:07:14.699: INFO: Trying to get logs from node hunter-worker2 pod pod-72cfc7cd-683f-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:07:14.738: INFO: Waiting for pod pod-72cfc7cd-683f-11ea-b08f-0242ac11000f to disappear Mar 17 11:07:14.762: INFO: Pod pod-72cfc7cd-683f-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:07:14.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cx5q7" for this suite. Mar 17 11:07:20.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:07:20.857: INFO: namespace: e2e-tests-emptydir-cx5q7, resource: bindings, ignored listing per whitelist Mar 17 11:07:20.861: INFO: namespace e2e-tests-emptydir-cx5q7 deletion completed in 6.094716634s • [SLOW TEST:10.270 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Mar 17 11:07:20.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-v2zhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v2zhb to expose endpoints map[] Mar 17 11:07:21.028: INFO: Get endpoints failed (2.438537ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 17 11:07:22.032: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v2zhb exposes endpoints map[] (1.006272212s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-v2zhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v2zhb to expose endpoints map[pod1:[80]] Mar 17 11:07:25.090: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v2zhb exposes endpoints map[pod1:[80]] (3.051142581s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-v2zhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v2zhb to expose endpoints map[pod1:[80] pod2:[80]] Mar 17 11:07:28.158: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v2zhb exposes endpoints map[pod1:[80] pod2:[80]] (3.064681396s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-v2zhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v2zhb to expose endpoints map[pod2:[80]] Mar 17 11:07:29.183: INFO: successfully validated that service endpoint-test2 in namespace 
e2e-tests-services-v2zhb exposes endpoints map[pod2:[80]] (1.019877947s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-v2zhb STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v2zhb to expose endpoints map[] Mar 17 11:07:30.222: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v2zhb exposes endpoints map[] (1.033686438s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:07:30.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-v2zhb" for this suite. Mar 17 11:07:52.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:07:52.379: INFO: namespace: e2e-tests-services-v2zhb, resource: bindings, ignored listing per whitelist Mar 17 11:07:52.381: INFO: namespace e2e-tests-services-v2zhb deletion completed in 22.125212423s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.520 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:07:52.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 17 11:07:52.530: INFO: Waiting up to 5m0s for pod "pod-8bc2c922-683f-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-c7r9x" to be "success or failure"
Mar 17 11:07:52.546: INFO: Pod "pod-8bc2c922-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.574789ms
Mar 17 11:07:54.553: INFO: Pod "pod-8bc2c922-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023555921s
Mar 17 11:07:56.557: INFO: Pod "pod-8bc2c922-683f-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027787219s
STEP: Saw pod success
Mar 17 11:07:56.557: INFO: Pod "pod-8bc2c922-683f-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:07:56.561: INFO: Trying to get logs from node hunter-worker pod pod-8bc2c922-683f-11ea-b08f-0242ac11000f container test-container:
STEP: delete the pod
Mar 17 11:07:56.614: INFO: Waiting for pod pod-8bc2c922-683f-11ea-b08f-0242ac11000f to disappear
Mar 17 11:07:56.648: INFO: Pod pod-8bc2c922-683f-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:07:56.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c7r9x" for this suite.
Mar 17 11:08:02.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:08:02.733: INFO: namespace: e2e-tests-emptydir-c7r9x, resource: bindings, ignored listing per whitelist
Mar 17 11:08:02.741: INFO: namespace e2e-tests-emptydir-c7r9x deletion completed in 6.089094376s
• [SLOW TEST:10.360 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:08:02.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:08:02.830: INFO: Creating deployment "nginx-deployment"
Mar 17 11:08:02.843: INFO: Waiting for observed generation 1
Mar 17 11:08:05.200: INFO: Waiting for all required pods to come up
Mar 17 11:08:05.205: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 17 11:08:13.214: INFO: Waiting for deployment "nginx-deployment" to complete
Mar 17 11:08:13.220: INFO: Updating deployment "nginx-deployment" with
a non-existent image
Mar 17 11:08:13.227: INFO: Updating deployment nginx-deployment
Mar 17 11:08:13.227: INFO: Waiting for observed generation 2
Mar 17 11:08:15.238: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 17 11:08:15.241: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 17 11:08:15.244: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 17 11:08:15.251: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 17 11:08:15.251: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 17 11:08:15.252: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 17 11:08:15.256: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Mar 17 11:08:15.256: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Mar 17 11:08:15.261: INFO: Updating deployment nginx-deployment
Mar 17 11:08:15.261: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Mar 17 11:08:15.462: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 17 11:08:15.540: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Mar 17 11:08:15.825: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-d78wq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d78wq/deployments/nginx-deployment,UID:91e76f29-683f-11ea-99e8-0242ac110002,ResourceVersion:313864,Generation:3,CreationTimestamp:2020-03-17 11:08:02 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-17 11:08:13 +0000 UTC 2020-03-17 11:08:02 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-03-17 11:08:15 +0000 UTC 2020-03-17 11:08:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 17 11:08:15.956: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-d78wq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d78wq/replicasets/nginx-deployment-5c98f8fb5,UID:981af65c-683f-11ea-99e8-0242ac110002,ResourceVersion:313911,Generation:3,CreationTimestamp:2020-03-17 11:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 91e76f29-683f-11ea-99e8-0242ac110002 0xc001e90887 0xc001e90888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:08:15.956: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 17 11:08:15.956: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-d78wq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d78wq/replicasets/nginx-deployment-85ddf47c5d,UID:91ea7d12-683f-11ea-99e8-0242ac110002,ResourceVersion:313912,Generation:3,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 91e76f29-683f-11ea-99e8-0242ac110002 0xc001e90947 0xc001e90948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 17 11:08:15.996: INFO: Pod "nginx-deployment-5c98f8fb5-429vz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-429vz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-429vz,UID:98202d73-683f-11ea-99e8-0242ac110002,ResourceVersion:313816,Generation:0,CreationTimestamp:2020-03-17 11:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f807b7 0xc001f807b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f809e0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f80a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-17 11:08:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.996: INFO: Pod "nginx-deployment-5c98f8fb5-7nxhb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7nxhb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-7nxhb,UID:9986c027-683f-11ea-99e8-0242ac110002,ResourceVersion:313895,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f80ac0 0xc001f80ac1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f80b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f80b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.996: INFO: Pod "nginx-deployment-5c98f8fb5-b7q5p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b7q5p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-b7q5p,UID:999d0972-683f-11ea-99e8-0242ac110002,ResourceVersion:313910,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f80c00 0xc001f80c01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f80c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f80ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.996: INFO: Pod "nginx-deployment-5c98f8fb5-b7sxj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b7sxj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-b7sxj,UID:998704e8-683f-11ea-99e8-0242ac110002,ResourceVersion:313905,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f80db0 0xc001f80db1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f80e30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f80e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.997: INFO: Pod "nginx-deployment-5c98f8fb5-bff5r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bff5r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-bff5r,UID:997b417d-683f-11ea-99e8-0242ac110002,ResourceVersion:313890,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f80f40 0xc001f80f41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f80fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f80fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.997: INFO: Pod "nginx-deployment-5c98f8fb5-c8rwk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c8rwk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-c8rwk,UID:9986c388-683f-11ea-99e8-0242ac110002,ResourceVersion:313894,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f81050 0xc001f81051}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f811b0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f811d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.997: INFO: Pod "nginx-deployment-5c98f8fb5-m7hwj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m7hwj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-m7hwj,UID:997b37a3-683f-11ea-99e8-0242ac110002,ResourceVersion:313876,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f81240 0xc001f81241}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f812c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f812e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.997: INFO: Pod "nginx-deployment-5c98f8fb5-nslnx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nslnx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-nslnx,UID:996eded8-683f-11ea-99e8-0242ac110002,ResourceVersion:313863,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f81350 0xc001f81351}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f81430} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f81450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.997: INFO: Pod "nginx-deployment-5c98f8fb5-sd9dv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sd9dv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-sd9dv,UID:98223d67-683f-11ea-99e8-0242ac110002,ResourceVersion:313823,Generation:0,CreationTimestamp:2020-03-17 11:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f814c0 0xc001f814c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f81540} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f81560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-17 11:08:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.997: INFO: Pod "nginx-deployment-5c98f8fb5-v7vn6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v7vn6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-v7vn6,UID:983ade7e-683f-11ea-99e8-0242ac110002,ResourceVersion:313849,Generation:0,CreationTimestamp:2020-03-17 11:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f81620 0xc001f81621}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f816a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f816c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-17 11:08:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.998: INFO: Pod "nginx-deployment-5c98f8fb5-w78qm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w78qm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-w78qm,UID:98411ca0-683f-11ea-99e8-0242ac110002,ResourceVersion:313836,Generation:0,CreationTimestamp:2020-03-17 11:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f81780 0xc001f81781}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f81800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f81820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-17 11:08:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.998: INFO: Pod "nginx-deployment-5c98f8fb5-zjs9k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zjs9k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-zjs9k,UID:9986f8c7-683f-11ea-99e8-0242ac110002,ResourceVersion:313903,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f818e0 0xc001f818e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f81960} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f81980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.998: INFO: Pod "nginx-deployment-5c98f8fb5-zkbvh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zkbvh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-5c98f8fb5-zkbvh,UID:98223880-683f-11ea-99e8-0242ac110002,ResourceVersion:313842,Generation:0,CreationTimestamp:2020-03-17 11:08:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 981af65c-683f-11ea-99e8-0242ac110002 0xc001f819f0 0xc001f819f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f81a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f81a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-17 11:08:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.998: INFO: Pod "nginx-deployment-85ddf47c5d-2cn5x" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2cn5x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-2cn5x,UID:9986e352-683f-11ea-99e8-0242ac110002,ResourceVersion:313899,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001f81b50 0xc001f81b51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001f81bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f81be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.998: INFO: Pod "nginx-deployment-85ddf47c5d-2mqq4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2mqq4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-2mqq4,UID:91f6709f-683f-11ea-99e8-0242ac110002,ResourceVersion:313763,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001f81c50 0xc001f81c51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f81cc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f81ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.84,StartTime:2020-03-17 11:08:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://648dda4ae0d9324d17c3c68fb219ede30f6ebf86b55e846eb7339b52392d4f05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.998: INFO: Pod "nginx-deployment-85ddf47c5d-586p2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-586p2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-586p2,UID:9986e565-683f-11ea-99e8-0242ac110002,ResourceVersion:313896,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001f81da0 0xc001f81da1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001f81e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f81e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.998: INFO: Pod "nginx-deployment-85ddf47c5d-74mgj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-74mgj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-74mgj,UID:91fa0afa-683f-11ea-99e8-0242ac110002,ResourceVersion:313782,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001f81f20 0xc001f81f21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f81f90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f81fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.25,StartTime:2020-03-17 11:08:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5f91528947bd40359a615ffae32e57c0667853328030a99c496d9bf3e686dbfb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.999: INFO: Pod "nginx-deployment-85ddf47c5d-9dmdw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9dmdw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-9dmdw,UID:91f662d8-683f-11ea-99e8-0242ac110002,ResourceVersion:313764,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0110 0xc001eb0111}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb0180} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb01a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.24,StartTime:2020-03-17 11:08:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57558119baccb5003df2c81823cb45d7b5bc0419d69be3432da67299e61c30cc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.999: INFO: Pod "nginx-deployment-85ddf47c5d-cbtqm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cbtqm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-cbtqm,UID:997b52b9-683f-11ea-99e8-0242ac110002,ResourceVersion:313886,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0260 0xc001eb0261}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb03b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb03d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.999: INFO: Pod "nginx-deployment-85ddf47c5d-d7f2v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d7f2v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-d7f2v,UID:996ef5d1-683f-11ea-99e8-0242ac110002,ResourceVersion:313909,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0440 0xc001eb0441}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb04b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb04d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-17 11:08:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.999: INFO: Pod "nginx-deployment-85ddf47c5d-dtkr2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dtkr2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-dtkr2,UID:9986ed38-683f-11ea-99e8-0242ac110002,ResourceVersion:313904,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0640 0xc001eb0641}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb06b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb06d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.999: INFO: Pod "nginx-deployment-85ddf47c5d-g9f98" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g9f98,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-g9f98,UID:91f54fea-683f-11ea-99e8-0242ac110002,ResourceVersion:313771,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0740 0xc001eb0741}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb07b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb07d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.83,StartTime:2020-03-17 11:08:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://63385a7297bdc374d1d4ffa9e5f0ff086a4f70d1160ca47cbbe610fad6f50c90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.999: INFO: Pod "nginx-deployment-85ddf47c5d-gqvf5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gqvf5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-gqvf5,UID:997b55e9-683f-11ea-99e8-0242ac110002,ResourceVersion:313877,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0890 0xc001eb0891}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb0900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb0920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:15.999: INFO: Pod "nginx-deployment-85ddf47c5d-j8n78" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j8n78,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-j8n78,UID:91f43c72-683f-11ea-99e8-0242ac110002,ResourceVersion:313729,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0990 0xc001eb0991}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb0a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb0a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.21,StartTime:2020-03-17 11:08:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b55387333ef2d017df79d4db6c511163486b128d7109ef2e6e3dc922e9f1919b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-j95ck" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j95ck,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-j95ck,UID:997b63cf-683f-11ea-99e8-0242ac110002,ResourceVersion:313878,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0ae0 0xc001eb0ae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb0b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb0b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-jmvr6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jmvr6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-jmvr6,UID:99579909-683f-11ea-99e8-0242ac110002,ResourceVersion:313906,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0be0 0xc001eb0be1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb0c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb0c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-17 11:08:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-jr2cf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jr2cf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-jr2cf,UID:9986fde0-683f-11ea-99e8-0242ac110002,ResourceVersion:313898,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0d20 0xc001eb0d21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb0d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb0db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-n7d8m" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n7d8m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-n7d8m,UID:91f558f6-683f-11ea-99e8-0242ac110002,ResourceVersion:313745,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0e20 0xc001eb0e21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb0e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb0eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.22,StartTime:2020-03-17 11:08:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e6f5dae0e3373db46c14f7e5841bcab4a1ddfadeaa6893f74c9387b6207c94af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-sm2jl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sm2jl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-sm2jl,UID:9986eb6a-683f-11ea-99e8-0242ac110002,ResourceVersion:313900,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb0f70 0xc001eb0f71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb0fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb1000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-t8xj8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t8xj8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-t8xj8,UID:91f65f94-683f-11ea-99e8-0242ac110002,ResourceVersion:313768,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb1070 0xc001eb1071}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb10e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb1100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.23,StartTime:2020-03-17 11:08:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e3d5a7cc46d0b00c8e8202fc8a28d04fa34fb43d5c09643c221d7f7eda225c77}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-tvktv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tvktv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-tvktv,UID:997b6387-683f-11ea-99e8-0242ac110002,ResourceVersion:313881,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb11c0 0xc001eb11c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb1230} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb1250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-wcglv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wcglv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-wcglv,UID:91f64073-683f-11ea-99e8-0242ac110002,ResourceVersion:313776,Generation:0,CreationTimestamp:2020-03-17 11:08:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb12d0 0xc001eb12d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001eb1340} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb1360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.85,StartTime:2020-03-17 11:08:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-17 11:08:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3a3a4e1f7b8999acdb50dbad315bef22741b596b29969396e5e6409db18504c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:08:16.000: INFO: Pod "nginx-deployment-85ddf47c5d-wwg8c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wwg8c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d78wq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d78wq/pods/nginx-deployment-85ddf47c5d-wwg8c,UID:996ef904-683f-11ea-99e8-0242ac110002,ResourceVersion:313871,Generation:0,CreationTimestamp:2020-03-17 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91ea7d12-683f-11ea-99e8-0242ac110002 0xc001eb1420 0xc001eb1421}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpxfm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rpxfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpxfm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb1490} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb14b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:08:15 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:08:16.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-d78wq" for this suite. 
Mar 17 11:08:32.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:08:32.230: INFO: namespace: e2e-tests-deployment-d78wq, resource: bindings, ignored listing per whitelist Mar 17 11:08:32.234: INFO: namespace e2e-tests-deployment-d78wq deletion completed in 16.178931994s • [SLOW TEST:29.493 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:08:32.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 11:08:32.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-jshzl" to be "success or failure" Mar 17 11:08:32.728: INFO: Pod "downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 211.493438ms Mar 17 11:08:34.731: INFO: Pod "downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214945295s Mar 17 11:08:36.735: INFO: Pod "downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218433941s Mar 17 11:08:38.737: INFO: Pod "downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 6.22108327s Mar 17 11:08:40.741: INFO: Pod "downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.225209706s STEP: Saw pod success Mar 17 11:08:40.741: INFO: Pod "downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:08:40.745: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 11:08:40.790: INFO: Waiting for pod downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f to disappear Mar 17 11:08:40.803: INFO: Pod downwardapi-volume-a38f71a9-683f-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:08:40.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jshzl" for this suite. 
Mar 17 11:08:46.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:08:46.890: INFO: namespace: e2e-tests-downward-api-jshzl, resource: bindings, ignored listing per whitelist Mar 17 11:08:46.920: INFO: namespace e2e-tests-downward-api-jshzl deletion completed in 6.114076984s • [SLOW TEST:14.686 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:08:46.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 17 11:08:47.059: INFO: Waiting up to 5m0s for pod "pod-ac425dc2-683f-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-pwrvx" to be "success or failure" Mar 17 11:08:47.064: INFO: Pod "pod-ac425dc2-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276022ms Mar 17 11:08:49.100: INFO: Pod "pod-ac425dc2-683f-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040530172s Mar 17 11:08:51.104: INFO: Pod "pod-ac425dc2-683f-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044465103s STEP: Saw pod success Mar 17 11:08:51.104: INFO: Pod "pod-ac425dc2-683f-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:08:51.107: INFO: Trying to get logs from node hunter-worker pod pod-ac425dc2-683f-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:08:51.147: INFO: Waiting for pod pod-ac425dc2-683f-11ea-b08f-0242ac11000f to disappear Mar 17 11:08:51.154: INFO: Pod pod-ac425dc2-683f-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:08:51.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pwrvx" for this suite. Mar 17 11:08:57.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:08:57.233: INFO: namespace: e2e-tests-emptydir-pwrvx, resource: bindings, ignored listing per whitelist Mar 17 11:08:57.268: INFO: namespace e2e-tests-emptydir-pwrvx deletion completed in 6.095181199s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:08:57.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Mar 17 11:08:57.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-qg2sz run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 17 11:09:02.699: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0317 11:09:02.634998 500 log.go:172] (0xc000a182c0) (0xc0005ffa40) Create stream\nI0317 11:09:02.635030 500 log.go:172] (0xc000a182c0) (0xc0005ffa40) Stream added, broadcasting: 1\nI0317 11:09:02.637008 500 log.go:172] (0xc000a182c0) Reply frame received for 1\nI0317 11:09:02.637069 500 log.go:172] (0xc000a182c0) (0xc0005ffae0) Create stream\nI0317 11:09:02.637090 500 log.go:172] (0xc000a182c0) (0xc0005ffae0) Stream added, broadcasting: 3\nI0317 11:09:02.638216 500 log.go:172] (0xc000a182c0) Reply frame received for 3\nI0317 11:09:02.638271 500 log.go:172] (0xc000a182c0) (0xc0007e6000) Create stream\nI0317 11:09:02.638287 500 log.go:172] (0xc000a182c0) (0xc0007e6000) Stream added, broadcasting: 5\nI0317 11:09:02.639074 500 log.go:172] (0xc000a182c0) Reply frame received for 5\nI0317 11:09:02.639115 500 log.go:172] (0xc000a182c0) (0xc000530000) Create stream\nI0317 11:09:02.639127 500 log.go:172] (0xc000a182c0) (0xc000530000) Stream added, broadcasting: 7\nI0317 11:09:02.640058 500 log.go:172] (0xc000a182c0) Reply frame received for 7\nI0317 11:09:02.640195 500 log.go:172] (0xc0005ffae0) (3) Writing data frame\nI0317 11:09:02.640295 500 log.go:172] (0xc0005ffae0) (3) Writing data frame\nI0317 11:09:02.641038 500 log.go:172] (0xc000a182c0) Data frame received for 5\nI0317 11:09:02.641064 500 log.go:172] (0xc0007e6000) (5) Data frame handling\nI0317 11:09:02.641102 500 log.go:172] (0xc0007e6000) (5) Data frame sent\nI0317 11:09:02.641674 500 log.go:172] (0xc000a182c0) Data frame received for 5\nI0317 11:09:02.641690 500 log.go:172] (0xc0007e6000) (5) Data frame handling\nI0317 11:09:02.641702 500 log.go:172] (0xc0007e6000) (5) Data frame sent\nI0317 11:09:02.675205 500 log.go:172] (0xc000a182c0) Data frame received for 5\nI0317 11:09:02.675238 500 log.go:172] (0xc0007e6000) (5) Data frame handling\nI0317 11:09:02.675461 500 log.go:172] 
(0xc000a182c0) Data frame received for 7\nI0317 11:09:02.675489 500 log.go:172] (0xc000530000) (7) Data frame handling\nI0317 11:09:02.676132 500 log.go:172] (0xc000a182c0) Data frame received for 1\nI0317 11:09:02.676167 500 log.go:172] (0xc0005ffa40) (1) Data frame handling\nI0317 11:09:02.676182 500 log.go:172] (0xc0005ffa40) (1) Data frame sent\nI0317 11:09:02.676199 500 log.go:172] (0xc000a182c0) (0xc0005ffae0) Stream removed, broadcasting: 3\nI0317 11:09:02.676244 500 log.go:172] (0xc000a182c0) (0xc0005ffa40) Stream removed, broadcasting: 1\nI0317 11:09:02.676298 500 log.go:172] (0xc000a182c0) Go away received\nI0317 11:09:02.676397 500 log.go:172] (0xc000a182c0) (0xc0005ffa40) Stream removed, broadcasting: 1\nI0317 11:09:02.676432 500 log.go:172] (0xc000a182c0) (0xc0005ffae0) Stream removed, broadcasting: 3\nI0317 11:09:02.676450 500 log.go:172] (0xc000a182c0) (0xc0007e6000) Stream removed, broadcasting: 5\nI0317 11:09:02.676476 500 log.go:172] (0xc000a182c0) (0xc000530000) Stream removed, broadcasting: 7\n" Mar 17 11:09:02.699: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:09:04.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qg2sz" for this suite. 
Mar 17 11:09:12.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:09:12.801: INFO: namespace: e2e-tests-kubectl-qg2sz, resource: bindings, ignored listing per whitelist Mar 17 11:09:12.818: INFO: namespace e2e-tests-kubectl-qg2sz deletion completed in 8.108457029s • [SLOW TEST:15.549 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:09:12.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 17 11:09:21.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:21.029: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:23.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:23.033: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:25.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:25.034: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:27.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:27.034: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:29.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:29.034: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:31.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:31.033: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:33.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:33.033: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:35.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:35.034: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:37.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:37.071: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:39.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:39.033: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 11:09:41.029: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 11:09:41.033: INFO: Pod 
pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:09:41.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mpk8l" for this suite. Mar 17 11:10:03.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:10:03.083: INFO: namespace: e2e-tests-container-lifecycle-hook-mpk8l, resource: bindings, ignored listing per whitelist Mar 17 11:10:03.126: INFO: namespace e2e-tests-container-lifecycle-hook-mpk8l deletion completed in 22.088382812s • [SLOW TEST:50.307 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:10:03.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:11:03.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-hx7f9" for this suite. Mar 17 11:11:25.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:11:25.268: INFO: namespace: e2e-tests-container-probe-hx7f9, resource: bindings, ignored listing per whitelist Mar 17 11:11:25.328: INFO: namespace e2e-tests-container-probe-hx7f9 deletion completed in 22.085257877s • [SLOW TEST:82.202 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:11:25.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Mar 17 11:11:25.450: INFO: Waiting up to 5m0s for pod "client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f" in namespace "e2e-tests-containers-7xg66" to be "success or failure" Mar 17 11:11:25.460: INFO: Pod "client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.834963ms Mar 17 11:11:27.463: INFO: Pod "client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013380596s Mar 17 11:11:29.467: INFO: Pod "client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017458564s STEP: Saw pod success Mar 17 11:11:29.467: INFO: Pod "client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:11:29.470: INFO: Trying to get logs from node hunter-worker pod client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:11:29.533: INFO: Waiting for pod client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f to disappear Mar 17 11:11:29.543: INFO: Pod client-containers-0aa944f1-6840-11ea-b08f-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:11:29.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-7xg66" for this suite. 
Mar 17 11:11:35.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:11:35.584: INFO: namespace: e2e-tests-containers-7xg66, resource: bindings, ignored listing per whitelist
Mar 17 11:11:35.634: INFO: namespace e2e-tests-containers-7xg66 deletion completed in 6.087220982s
• [SLOW TEST:10.306 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:11:35.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 17 11:11:40.264: INFO: Successfully updated pod "pod-update-10cdd51e-6840-11ea-b08f-0242ac11000f"
STEP: verifying the updated pod is in kubernetes
Mar 17 11:11:40.273: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:11:40.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nk992" for this suite.
Mar 17 11:12:02.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:12:02.303: INFO: namespace: e2e-tests-pods-nk992, resource: bindings, ignored listing per whitelist
Mar 17 11:12:02.365: INFO: namespace e2e-tests-pods-nk992 deletion completed in 22.08832595s
• [SLOW TEST:26.732 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:12:02.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-20bb5879-6840-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 17 11:12:02.474: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-mq8cb" to be "success or failure"
Mar 17 11:12:02.478: INFO: Pod "pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350515ms
Mar 17 11:12:04.496: INFO: Pod "pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022716126s
Mar 17 11:12:06.501: INFO: Pod "pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027052518s
STEP: Saw pod success
Mar 17 11:12:06.501: INFO: Pod "pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:12:06.504: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 17 11:12:06.528: INFO: Waiting for pod pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f to disappear
Mar 17 11:12:06.547: INFO: Pod pod-projected-secrets-20bcdcc1-6840-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:12:06.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mq8cb" for this suite.
Mar 17 11:12:12.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:12:12.629: INFO: namespace: e2e-tests-projected-mq8cb, resource: bindings, ignored listing per whitelist
Mar 17 11:12:12.646: INFO: namespace e2e-tests-projected-mq8cb deletion completed in 6.095564976s
• [SLOW TEST:10.280 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:12:12.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-26dd5404-6840-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 17 11:12:12.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-9fbvj" to be "success or failure"
Mar 17 11:12:12.792: INFO: Pod "pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.116109ms
Mar 17 11:12:14.795: INFO: Pod "pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029782831s
Mar 17 11:12:16.808: INFO: Pod "pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042618695s
STEP: Saw pod success
Mar 17 11:12:16.808: INFO: Pod "pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:12:16.810: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f container configmap-volume-test:
STEP: delete the pod
Mar 17 11:12:16.840: INFO: Waiting for pod pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f to disappear
Mar 17 11:12:16.852: INFO: Pod pod-configmaps-26df7abf-6840-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:12:16.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9fbvj" for this suite.
Mar 17 11:12:22.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:12:22.876: INFO: namespace: e2e-tests-configmap-9fbvj, resource: bindings, ignored listing per whitelist
Mar 17 11:12:22.948: INFO: namespace e2e-tests-configmap-9fbvj deletion completed in 6.092171177s
• [SLOW TEST:10.302 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:12:22.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-rgkg
STEP: Creating a pod to test atomic-volume-subpath
Mar 17 11:12:23.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rgkg" in namespace "e2e-tests-subpath-j6q58" to be "success or failure"
Mar 17 11:12:23.120: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Pending", Reason="", readiness=false. Elapsed: 9.393603ms
Mar 17 11:12:25.124: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013377883s
Mar 17 11:12:27.179: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069141557s
Mar 17 11:12:29.183: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 6.073087297s
Mar 17 11:12:31.187: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 8.077272371s
Mar 17 11:12:33.192: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 10.081467963s
Mar 17 11:12:35.195: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 12.084563488s
Mar 17 11:12:37.199: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 14.088874712s
Mar 17 11:12:39.203: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 16.093073248s
Mar 17 11:12:41.207: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 18.096678954s
Mar 17 11:12:43.211: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 20.100930682s
Mar 17 11:12:45.215: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 22.104868136s
Mar 17 11:12:47.219: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Running", Reason="", readiness=false. Elapsed: 24.109012339s
Mar 17 11:12:49.223: INFO: Pod "pod-subpath-test-projected-rgkg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.113263044s
STEP: Saw pod success
Mar 17 11:12:49.223: INFO: Pod "pod-subpath-test-projected-rgkg" satisfied condition "success or failure"
Mar 17 11:12:49.226: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-rgkg container test-container-subpath-projected-rgkg:
STEP: delete the pod
Mar 17 11:12:49.255: INFO: Waiting for pod pod-subpath-test-projected-rgkg to disappear
Mar 17 11:12:49.265: INFO: Pod pod-subpath-test-projected-rgkg no longer exists
STEP: Deleting pod pod-subpath-test-projected-rgkg
Mar 17 11:12:49.265: INFO: Deleting pod "pod-subpath-test-projected-rgkg" in namespace "e2e-tests-subpath-j6q58"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:12:49.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-j6q58" for this suite.
Mar 17 11:12:55.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:12:55.359: INFO: namespace: e2e-tests-subpath-j6q58, resource: bindings, ignored listing per whitelist
Mar 17 11:12:55.396: INFO: namespace e2e-tests-subpath-j6q58 deletion completed in 6.124062737s
• [SLOW TEST:32.448 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:12:55.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-djsh
STEP: Creating a pod to test atomic-volume-subpath
Mar 17 11:12:55.553: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-djsh" in namespace "e2e-tests-subpath-rqdwn" to be "success or failure"
Mar 17 11:12:55.560: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362679ms
Mar 17 11:12:57.563: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009937517s
Mar 17 11:12:59.665: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111755278s
Mar 17 11:13:01.671: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 6.117419738s
Mar 17 11:13:03.675: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 8.122082259s
Mar 17 11:13:05.679: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 10.126162927s
Mar 17 11:13:07.683: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 12.130079347s
Mar 17 11:13:09.688: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 14.134228254s
Mar 17 11:13:11.692: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 16.13863196s
Mar 17 11:13:13.696: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 18.142938024s
Mar 17 11:13:15.701: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 20.147579394s
Mar 17 11:13:17.705: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 22.151602713s
Mar 17 11:13:19.712: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Running", Reason="", readiness=false. Elapsed: 24.158870693s
Mar 17 11:13:21.717: INFO: Pod "pod-subpath-test-configmap-djsh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.163698441s
STEP: Saw pod success
Mar 17 11:13:21.717: INFO: Pod "pod-subpath-test-configmap-djsh" satisfied condition "success or failure"
Mar 17 11:13:21.720: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-djsh container test-container-subpath-configmap-djsh:
STEP: delete the pod
Mar 17 11:13:21.756: INFO: Waiting for pod pod-subpath-test-configmap-djsh to disappear
Mar 17 11:13:21.774: INFO: Pod pod-subpath-test-configmap-djsh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-djsh
Mar 17 11:13:21.774: INFO: Deleting pod "pod-subpath-test-configmap-djsh" in namespace "e2e-tests-subpath-rqdwn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:13:21.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rqdwn" for this suite.
Mar 17 11:13:27.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:13:27.869: INFO: namespace: e2e-tests-subpath-rqdwn, resource: bindings, ignored listing per whitelist
Mar 17 11:13:27.910: INFO: namespace e2e-tests-subpath-rqdwn deletion completed in 6.130148607s
• [SLOW TEST:32.514 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:13:27.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-53c121ac-6840-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 17 11:13:28.084: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-rfh4k" to be "success or failure"
Mar 17 11:13:28.098: INFO: Pod "pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.758263ms
Mar 17 11:13:30.102: INFO: Pod "pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017263999s
Mar 17 11:13:32.106: INFO: Pod "pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021242288s
STEP: Saw pod success
Mar 17 11:13:32.106: INFO: Pod "pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:13:32.109: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 17 11:13:32.124: INFO: Waiting for pod pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f to disappear
Mar 17 11:13:32.128: INFO: Pod pod-projected-configmaps-53c1d12e-6840-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:13:32.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rfh4k" for this suite.
Mar 17 11:13:38.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:13:38.216: INFO: namespace: e2e-tests-projected-rfh4k, resource: bindings, ignored listing per whitelist
Mar 17 11:13:38.282: INFO: namespace e2e-tests-projected-rfh4k deletion completed in 6.150355514s
• [SLOW TEST:10.372 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:13:38.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:13:38.434: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 17 11:13:38.444: INFO: Number of nodes with available pods: 0
Mar 17 11:13:38.444: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 17 11:13:38.491: INFO: Number of nodes with available pods: 0
Mar 17 11:13:38.491: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:39.494: INFO: Number of nodes with available pods: 0
Mar 17 11:13:39.494: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:40.495: INFO: Number of nodes with available pods: 0
Mar 17 11:13:40.495: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:41.496: INFO: Number of nodes with available pods: 1
Mar 17 11:13:41.496: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 17 11:13:41.523: INFO: Number of nodes with available pods: 1
Mar 17 11:13:41.523: INFO: Number of running nodes: 0, number of available pods: 1
Mar 17 11:13:42.527: INFO: Number of nodes with available pods: 0
Mar 17 11:13:42.527: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 17 11:13:42.553: INFO: Number of nodes with available pods: 0
Mar 17 11:13:42.553: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:43.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:43.558: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:44.558: INFO: Number of nodes with available pods: 0
Mar 17 11:13:44.558: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:45.558: INFO: Number of nodes with available pods: 0
Mar 17 11:13:45.558: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:46.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:46.557: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:47.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:47.557: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:48.558: INFO: Number of nodes with available pods: 0
Mar 17 11:13:48.558: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:49.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:49.557: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:50.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:50.557: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:51.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:51.557: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:52.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:52.557: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:53.557: INFO: Number of nodes with available pods: 0
Mar 17 11:13:53.558: INFO: Node hunter-worker is running more than one daemon pod
Mar 17 11:13:54.557: INFO: Number of nodes with available pods: 1
Mar 17 11:13:54.557: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6lt8p, will wait for the garbage collector to delete the pods
Mar 17 11:13:54.622: INFO: Deleting DaemonSet.extensions daemon-set took: 5.70212ms
Mar 17 11:13:54.722: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.227945ms
Mar 17 11:13:58.226: INFO: Number of nodes with available pods: 0
Mar 17 11:13:58.226: INFO: Number of running nodes: 0, number of available pods: 0
Mar 17 11:13:58.231: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6lt8p/daemonsets","resourceVersion":"315208"},"items":null}
Mar 17 11:13:58.234: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6lt8p/pods","resourceVersion":"315208"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:13:58.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6lt8p" for this suite.
Mar 17 11:14:04.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:14:04.378: INFO: namespace: e2e-tests-daemonsets-6lt8p, resource: bindings, ignored listing per whitelist
Mar 17 11:14:04.384: INFO: namespace e2e-tests-daemonsets-6lt8p deletion completed in 6.088644455s
• [SLOW TEST:26.101 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:14:04.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 17 11:14:04.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-8g8hm'
Mar 17 11:14:04.575: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 17 11:14:04.575: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Mar 17 11:14:06.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-8g8hm'
Mar 17 11:14:06.844: INFO: stderr: ""
Mar 17 11:14:06.844: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:14:06.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8g8hm" for this suite.
Mar 17 11:16:08.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:16:08.970: INFO: namespace: e2e-tests-kubectl-8g8hm, resource: bindings, ignored listing per whitelist
Mar 17 11:16:08.974: INFO: namespace e2e-tests-kubectl-8g8hm deletion completed in 2m2.125472305s
• [SLOW TEST:124.590 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:16:08.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-kl4f6
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-kl4f6
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-kl4f6
Mar 17 11:16:09.071: INFO: Found 0 stateful pods, waiting for 1
Mar 17 11:16:19.075: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Mar 17 11:16:19.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 17 11:16:19.324: INFO: stderr: "I0317 11:16:19.223584 571 log.go:172] (0xc000154840) (0xc0006c14a0) Create stream\nI0317 11:16:19.223649 571 log.go:172] (0xc000154840) (0xc0006c14a0) Stream added, broadcasting: 1\nI0317 11:16:19.225949 571 log.go:172] (0xc000154840) Reply frame received for 1\nI0317 11:16:19.226011 571 log.go:172] (0xc000154840) (0xc000236000) Create stream\nI0317 11:16:19.226042 571 log.go:172] (0xc000154840) (0xc000236000) Stream added, broadcasting: 3\nI0317 11:16:19.227069 571 log.go:172] (0xc000154840) Reply frame received for 3\nI0317 11:16:19.227118 571 log.go:172] (0xc000154840) (0xc0006c1540) Create stream\nI0317 11:16:19.227145 571 log.go:172] (0xc000154840) (0xc0006c1540) Stream added, broadcasting: 5\nI0317 11:16:19.227957 571 log.go:172] (0xc000154840) Reply frame received for 5\nI0317 11:16:19.319100 571 log.go:172] (0xc000154840) Data frame received for 3\nI0317 11:16:19.319128 571 log.go:172] (0xc000236000) (3) Data frame handling\nI0317 11:16:19.319143 571 log.go:172] (0xc000236000) (3) Data frame sent\nI0317 11:16:19.319321 571 log.go:172] (0xc000154840) Data frame received for 5\nI0317 11:16:19.319349 571 log.go:172] (0xc0006c1540) (5) Data frame handling\nI0317 11:16:19.319370 571 log.go:172] (0xc000154840) Data frame received for 3\nI0317 11:16:19.319378 571 log.go:172] (0xc000236000) (3) Data frame handling\nI0317 11:16:19.321757 571 log.go:172] (0xc000154840) Data frame received for 1\nI0317 11:16:19.321778 571 log.go:172] (0xc0006c14a0) (1) Data frame handling\nI0317 11:16:19.321786 571 log.go:172] (0xc0006c14a0) (1) Data frame sent\nI0317 11:16:19.321935 571 log.go:172] (0xc000154840) (0xc0006c14a0) Stream removed, broadcasting: 1\nI0317 11:16:19.322076 571 log.go:172] (0xc000154840) (0xc0006c14a0) Stream removed, broadcasting: 1\nI0317 11:16:19.322099 571 log.go:172] (0xc000154840) (0xc000236000) Stream removed, broadcasting: 3\nI0317 11:16:19.322216 571 log.go:172] (0xc000154840) Go away received\nI0317 11:16:19.322249 571 log.go:172] (0xc000154840) (0xc0006c1540) Stream removed, broadcasting: 5\n"
Mar 17 11:16:19.324: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 17 11:16:19.324: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Mar 17 11:16:19.329: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 17 11:16:29.494: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 17 11:16:29.494: INFO: Waiting for statefulset status.replicas updated to 0
Mar 17 11:16:29.509: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 17 11:16:29.509: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }]
Mar 17 11:16:29.509: INFO:
Mar 17 11:16:29.509: INFO: StatefulSet ss has not reached scale 3, at 1
Mar 17 11:16:30.514: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997388351s
Mar 17 11:16:31.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992512153s
Mar 17 11:16:32.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987727575s
Mar 17 11:16:33.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982715687s
Mar 17 11:16:34.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977645314s
Mar 17 11:16:35.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972627931s
Mar 17 11:16:36.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958770509s
Mar 17 11:16:37.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953184335s
Mar 17 11:16:38.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.449769ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-kl4f6
Mar 17 11:16:39.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:16:39.784: INFO: stderr: "I0317 11:16:39.701279 593 log.go:172] (0xc00014c790) (0xc00047f4a0) Create stream\nI0317 11:16:39.701338 593 log.go:172] (0xc00014c790) (0xc00047f4a0) Stream added, broadcasting: 1\nI0317 11:16:39.703884 593 log.go:172] (0xc00014c790) Reply frame received for 1\nI0317 11:16:39.703943 593 log.go:172] (0xc00014c790) (0xc0004ba000) Create stream\nI0317 11:16:39.703959 593 log.go:172] (0xc00014c790) (0xc0004ba000) Stream added, broadcasting: 3\nI0317 11:16:39.705039 593 log.go:172] (0xc00014c790) Reply frame received for 3\nI0317 11:16:39.705076 593 log.go:172] (0xc00014c790) (0xc00036e000) Create stream\nI0317 11:16:39.705087 593 log.go:172] (0xc00014c790) (0xc00036e000) Stream added, broadcasting: 
5\nI0317 11:16:39.706348 593 log.go:172] (0xc00014c790) Reply frame received for 5\nI0317 11:16:39.779126 593 log.go:172] (0xc00014c790) Data frame received for 3\nI0317 11:16:39.779180 593 log.go:172] (0xc0004ba000) (3) Data frame handling\nI0317 11:16:39.779197 593 log.go:172] (0xc0004ba000) (3) Data frame sent\nI0317 11:16:39.779211 593 log.go:172] (0xc00014c790) Data frame received for 3\nI0317 11:16:39.779221 593 log.go:172] (0xc0004ba000) (3) Data frame handling\nI0317 11:16:39.779249 593 log.go:172] (0xc00014c790) Data frame received for 5\nI0317 11:16:39.779263 593 log.go:172] (0xc00036e000) (5) Data frame handling\nI0317 11:16:39.780931 593 log.go:172] (0xc00014c790) Data frame received for 1\nI0317 11:16:39.780949 593 log.go:172] (0xc00047f4a0) (1) Data frame handling\nI0317 11:16:39.780955 593 log.go:172] (0xc00047f4a0) (1) Data frame sent\nI0317 11:16:39.780967 593 log.go:172] (0xc00014c790) (0xc00047f4a0) Stream removed, broadcasting: 1\nI0317 11:16:39.780979 593 log.go:172] (0xc00014c790) Go away received\nI0317 11:16:39.781212 593 log.go:172] (0xc00014c790) (0xc00047f4a0) Stream removed, broadcasting: 1\nI0317 11:16:39.781228 593 log.go:172] (0xc00014c790) (0xc0004ba000) Stream removed, broadcasting: 3\nI0317 11:16:39.781234 593 log.go:172] (0xc00014c790) (0xc00036e000) Stream removed, broadcasting: 5\n" Mar 17 11:16:39.785: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:16:39.785: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:16:39.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:16:39.973: INFO: stderr: "I0317 11:16:39.903157 615 log.go:172] (0xc0008122c0) (0xc0003114a0) Create stream\nI0317 11:16:39.903236 615 log.go:172] (0xc0008122c0) (0xc0003114a0) Stream 
added, broadcasting: 1\nI0317 11:16:39.906601 615 log.go:172] (0xc0008122c0) Reply frame received for 1\nI0317 11:16:39.906654 615 log.go:172] (0xc0008122c0) (0xc0005cc000) Create stream\nI0317 11:16:39.906670 615 log.go:172] (0xc0008122c0) (0xc0005cc000) Stream added, broadcasting: 3\nI0317 11:16:39.907965 615 log.go:172] (0xc0008122c0) Reply frame received for 3\nI0317 11:16:39.908013 615 log.go:172] (0xc0008122c0) (0xc0005cc0a0) Create stream\nI0317 11:16:39.908027 615 log.go:172] (0xc0008122c0) (0xc0005cc0a0) Stream added, broadcasting: 5\nI0317 11:16:39.909003 615 log.go:172] (0xc0008122c0) Reply frame received for 5\nI0317 11:16:39.968749 615 log.go:172] (0xc0008122c0) Data frame received for 3\nI0317 11:16:39.968776 615 log.go:172] (0xc0005cc000) (3) Data frame handling\nI0317 11:16:39.968788 615 log.go:172] (0xc0005cc000) (3) Data frame sent\nI0317 11:16:39.968814 615 log.go:172] (0xc0008122c0) Data frame received for 5\nI0317 11:16:39.968822 615 log.go:172] (0xc0005cc0a0) (5) Data frame handling\nI0317 11:16:39.968832 615 log.go:172] (0xc0005cc0a0) (5) Data frame sent\nI0317 11:16:39.968842 615 log.go:172] (0xc0008122c0) Data frame received for 5\nI0317 11:16:39.968851 615 log.go:172] (0xc0005cc0a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0317 11:16:39.968962 615 log.go:172] (0xc0008122c0) Data frame received for 3\nI0317 11:16:39.968987 615 log.go:172] (0xc0005cc000) (3) Data frame handling\nI0317 11:16:39.970661 615 log.go:172] (0xc0008122c0) Data frame received for 1\nI0317 11:16:39.970682 615 log.go:172] (0xc0003114a0) (1) Data frame handling\nI0317 11:16:39.970689 615 log.go:172] (0xc0003114a0) (1) Data frame sent\nI0317 11:16:39.970714 615 log.go:172] (0xc0008122c0) (0xc0003114a0) Stream removed, broadcasting: 1\nI0317 11:16:39.970735 615 log.go:172] (0xc0008122c0) Go away received\nI0317 11:16:39.970930 615 log.go:172] (0xc0008122c0) (0xc0003114a0) Stream removed, broadcasting: 1\nI0317 
11:16:39.970946 615 log.go:172] (0xc0008122c0) (0xc0005cc000) Stream removed, broadcasting: 3\nI0317 11:16:39.970953 615 log.go:172] (0xc0008122c0) (0xc0005cc0a0) Stream removed, broadcasting: 5\n" Mar 17 11:16:39.974: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:16:39.974: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:16:39.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:16:40.162: INFO: stderr: "I0317 11:16:40.098985 638 log.go:172] (0xc000138840) (0xc0006572c0) Create stream\nI0317 11:16:40.099033 638 log.go:172] (0xc000138840) (0xc0006572c0) Stream added, broadcasting: 1\nI0317 11:16:40.101603 638 log.go:172] (0xc000138840) Reply frame received for 1\nI0317 11:16:40.101665 638 log.go:172] (0xc000138840) (0xc000604000) Create stream\nI0317 11:16:40.101687 638 log.go:172] (0xc000138840) (0xc000604000) Stream added, broadcasting: 3\nI0317 11:16:40.102957 638 log.go:172] (0xc000138840) Reply frame received for 3\nI0317 11:16:40.103008 638 log.go:172] (0xc000138840) (0xc0000d6000) Create stream\nI0317 11:16:40.103027 638 log.go:172] (0xc000138840) (0xc0000d6000) Stream added, broadcasting: 5\nI0317 11:16:40.104115 638 log.go:172] (0xc000138840) Reply frame received for 5\nI0317 11:16:40.156679 638 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:16:40.156718 638 log.go:172] (0xc000604000) (3) Data frame handling\nI0317 11:16:40.156734 638 log.go:172] (0xc000604000) (3) Data frame sent\nI0317 11:16:40.156778 638 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:16:40.156788 638 log.go:172] (0xc000604000) (3) Data frame handling\nI0317 11:16:40.156809 638 log.go:172] (0xc000138840) Data frame received for 5\nI0317 11:16:40.156818 638 log.go:172] 
(0xc0000d6000) (5) Data frame handling\nI0317 11:16:40.156829 638 log.go:172] (0xc0000d6000) (5) Data frame sent\nI0317 11:16:40.156838 638 log.go:172] (0xc000138840) Data frame received for 5\nI0317 11:16:40.156847 638 log.go:172] (0xc0000d6000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0317 11:16:40.158803 638 log.go:172] (0xc000138840) Data frame received for 1\nI0317 11:16:40.158841 638 log.go:172] (0xc0006572c0) (1) Data frame handling\nI0317 11:16:40.158864 638 log.go:172] (0xc0006572c0) (1) Data frame sent\nI0317 11:16:40.158887 638 log.go:172] (0xc000138840) (0xc0006572c0) Stream removed, broadcasting: 1\nI0317 11:16:40.158930 638 log.go:172] (0xc000138840) Go away received\nI0317 11:16:40.159171 638 log.go:172] (0xc000138840) (0xc0006572c0) Stream removed, broadcasting: 1\nI0317 11:16:40.159198 638 log.go:172] (0xc000138840) (0xc000604000) Stream removed, broadcasting: 3\nI0317 11:16:40.159205 638 log.go:172] (0xc000138840) (0xc0000d6000) Stream removed, broadcasting: 5\n" Mar 17 11:16:40.162: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:16:40.162: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:16:40.166: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 17 11:16:50.171: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 11:16:50.171: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 11:16:50.171: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 17 11:16:50.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || 
true' Mar 17 11:16:50.407: INFO: stderr: "I0317 11:16:50.304534 661 log.go:172] (0xc0001486e0) (0xc0000ff360) Create stream\nI0317 11:16:50.304613 661 log.go:172] (0xc0001486e0) (0xc0000ff360) Stream added, broadcasting: 1\nI0317 11:16:50.307138 661 log.go:172] (0xc0001486e0) Reply frame received for 1\nI0317 11:16:50.307184 661 log.go:172] (0xc0001486e0) (0xc000418000) Create stream\nI0317 11:16:50.307200 661 log.go:172] (0xc0001486e0) (0xc000418000) Stream added, broadcasting: 3\nI0317 11:16:50.308305 661 log.go:172] (0xc0001486e0) Reply frame received for 3\nI0317 11:16:50.308338 661 log.go:172] (0xc0001486e0) (0xc0004180a0) Create stream\nI0317 11:16:50.308349 661 log.go:172] (0xc0001486e0) (0xc0004180a0) Stream added, broadcasting: 5\nI0317 11:16:50.309319 661 log.go:172] (0xc0001486e0) Reply frame received for 5\nI0317 11:16:50.401782 661 log.go:172] (0xc0001486e0) Data frame received for 5\nI0317 11:16:50.401822 661 log.go:172] (0xc0004180a0) (5) Data frame handling\nI0317 11:16:50.401860 661 log.go:172] (0xc0001486e0) Data frame received for 3\nI0317 11:16:50.401945 661 log.go:172] (0xc000418000) (3) Data frame handling\nI0317 11:16:50.401974 661 log.go:172] (0xc000418000) (3) Data frame sent\nI0317 11:16:50.401990 661 log.go:172] (0xc0001486e0) Data frame received for 3\nI0317 11:16:50.402000 661 log.go:172] (0xc000418000) (3) Data frame handling\nI0317 11:16:50.403298 661 log.go:172] (0xc0001486e0) Data frame received for 1\nI0317 11:16:50.403327 661 log.go:172] (0xc0000ff360) (1) Data frame handling\nI0317 11:16:50.403351 661 log.go:172] (0xc0000ff360) (1) Data frame sent\nI0317 11:16:50.403363 661 log.go:172] (0xc0001486e0) (0xc0000ff360) Stream removed, broadcasting: 1\nI0317 11:16:50.403467 661 log.go:172] (0xc0001486e0) Go away received\nI0317 11:16:50.403528 661 log.go:172] (0xc0001486e0) (0xc0000ff360) Stream removed, broadcasting: 1\nI0317 11:16:50.403565 661 log.go:172] (0xc0001486e0) (0xc000418000) Stream removed, broadcasting: 3\nI0317 
11:16:50.403585 661 log.go:172] (0xc0001486e0) (0xc0004180a0) Stream removed, broadcasting: 5\n" Mar 17 11:16:50.408: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:16:50.408: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:16:50.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:16:50.659: INFO: stderr: "I0317 11:16:50.541319 685 log.go:172] (0xc00016c840) (0xc0002e72c0) Create stream\nI0317 11:16:50.541367 685 log.go:172] (0xc00016c840) (0xc0002e72c0) Stream added, broadcasting: 1\nI0317 11:16:50.543376 685 log.go:172] (0xc00016c840) Reply frame received for 1\nI0317 11:16:50.543420 685 log.go:172] (0xc00016c840) (0xc0003ae000) Create stream\nI0317 11:16:50.543434 685 log.go:172] (0xc00016c840) (0xc0003ae000) Stream added, broadcasting: 3\nI0317 11:16:50.544117 685 log.go:172] (0xc00016c840) Reply frame received for 3\nI0317 11:16:50.544148 685 log.go:172] (0xc00016c840) (0xc0002e7360) Create stream\nI0317 11:16:50.544164 685 log.go:172] (0xc00016c840) (0xc0002e7360) Stream added, broadcasting: 5\nI0317 11:16:50.544837 685 log.go:172] (0xc00016c840) Reply frame received for 5\nI0317 11:16:50.652804 685 log.go:172] (0xc00016c840) Data frame received for 3\nI0317 11:16:50.652868 685 log.go:172] (0xc0003ae000) (3) Data frame handling\nI0317 11:16:50.652899 685 log.go:172] (0xc0003ae000) (3) Data frame sent\nI0317 11:16:50.652927 685 log.go:172] (0xc00016c840) Data frame received for 3\nI0317 11:16:50.652952 685 log.go:172] (0xc0003ae000) (3) Data frame handling\nI0317 11:16:50.653044 685 log.go:172] (0xc00016c840) Data frame received for 5\nI0317 11:16:50.653083 685 log.go:172] (0xc0002e7360) (5) Data frame handling\nI0317 11:16:50.655378 685 log.go:172] (0xc00016c840) Data frame 
received for 1\nI0317 11:16:50.655403 685 log.go:172] (0xc0002e72c0) (1) Data frame handling\nI0317 11:16:50.655417 685 log.go:172] (0xc0002e72c0) (1) Data frame sent\nI0317 11:16:50.655432 685 log.go:172] (0xc00016c840) (0xc0002e72c0) Stream removed, broadcasting: 1\nI0317 11:16:50.655452 685 log.go:172] (0xc00016c840) Go away received\nI0317 11:16:50.655707 685 log.go:172] (0xc00016c840) (0xc0002e72c0) Stream removed, broadcasting: 1\nI0317 11:16:50.655732 685 log.go:172] (0xc00016c840) (0xc0003ae000) Stream removed, broadcasting: 3\nI0317 11:16:50.655745 685 log.go:172] (0xc00016c840) (0xc0002e7360) Stream removed, broadcasting: 5\n" Mar 17 11:16:50.659: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:16:50.659: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:16:50.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:16:50.907: INFO: stderr: "I0317 11:16:50.810755 707 log.go:172] (0xc0007be160) (0xc0006f0640) Create stream\nI0317 11:16:50.810807 707 log.go:172] (0xc0007be160) (0xc0006f0640) Stream added, broadcasting: 1\nI0317 11:16:50.812588 707 log.go:172] (0xc0007be160) Reply frame received for 1\nI0317 11:16:50.812621 707 log.go:172] (0xc0007be160) (0xc000340c80) Create stream\nI0317 11:16:50.812630 707 log.go:172] (0xc0007be160) (0xc000340c80) Stream added, broadcasting: 3\nI0317 11:16:50.813328 707 log.go:172] (0xc0007be160) Reply frame received for 3\nI0317 11:16:50.813366 707 log.go:172] (0xc0007be160) (0xc00081a000) Create stream\nI0317 11:16:50.813378 707 log.go:172] (0xc0007be160) (0xc00081a000) Stream added, broadcasting: 5\nI0317 11:16:50.814108 707 log.go:172] (0xc0007be160) Reply frame received for 5\nI0317 11:16:50.900499 707 log.go:172] (0xc0007be160) Data frame 
received for 3\nI0317 11:16:50.900651 707 log.go:172] (0xc000340c80) (3) Data frame handling\nI0317 11:16:50.900669 707 log.go:172] (0xc000340c80) (3) Data frame sent\nI0317 11:16:50.900687 707 log.go:172] (0xc0007be160) Data frame received for 5\nI0317 11:16:50.900699 707 log.go:172] (0xc00081a000) (5) Data frame handling\nI0317 11:16:50.900835 707 log.go:172] (0xc0007be160) Data frame received for 3\nI0317 11:16:50.900870 707 log.go:172] (0xc000340c80) (3) Data frame handling\nI0317 11:16:50.902668 707 log.go:172] (0xc0007be160) Data frame received for 1\nI0317 11:16:50.902687 707 log.go:172] (0xc0006f0640) (1) Data frame handling\nI0317 11:16:50.902709 707 log.go:172] (0xc0006f0640) (1) Data frame sent\nI0317 11:16:50.902731 707 log.go:172] (0xc0007be160) (0xc0006f0640) Stream removed, broadcasting: 1\nI0317 11:16:50.902910 707 log.go:172] (0xc0007be160) (0xc0006f0640) Stream removed, broadcasting: 1\nI0317 11:16:50.902929 707 log.go:172] (0xc0007be160) (0xc000340c80) Stream removed, broadcasting: 3\nI0317 11:16:50.902936 707 log.go:172] (0xc0007be160) (0xc00081a000) Stream removed, broadcasting: 5\n" Mar 17 11:16:50.907: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:16:50.907: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:16:50.907: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:16:50.911: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 17 11:17:00.920: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:17:00.920: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:17:00.920: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:17:00.935: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:00.935: INFO: ss-0 hunter-worker Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:00.935: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:00.935: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:00.935: INFO: Mar 17 11:17:00.935: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:17:01.989: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:01.990: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:01.990: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:01.990: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:01.990: INFO: Mar 17 11:17:01.990: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:17:02.995: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:02.995: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:02.995: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:02.995: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:02.996: INFO: Mar 17 11:17:02.996: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:17:04.000: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:04.000: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:04.000: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:04.000: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:04.000: INFO: Mar 17 11:17:04.000: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:17:05.006: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:05.006: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:05.006: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:05.006: INFO: Mar 17 11:17:05.006: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 17 11:17:06.015: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:06.015: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:06.015: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:06.015: INFO: Mar 17 11:17:06.015: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 17 11:17:07.020: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:07.020: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:07.020: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:07.021: INFO: Mar 17 11:17:07.021: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 17 11:17:08.024: INFO: POD NODE PHASE GRACE 
CONDITIONS Mar 17 11:17:08.024: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:08.024: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }] Mar 17 11:17:08.024: INFO: Mar 17 11:17:08.024: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 17 11:17:09.029: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:17:09.029: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }] Mar 17 11:17:09.029: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 
11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }]
Mar 17 11:17:09.029: INFO:
Mar 17 11:17:09.029: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 17 11:17:10.034: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 17 11:17:10.034: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:09 +0000 UTC }]
Mar 17 11:17:10.034: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:16:29 +0000 UTC }]
Mar 17 11:17:10.034: INFO:
Mar 17 11:17:10.034: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-kl4f6
Mar 17 11:17:11.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:17:11.181: INFO: rc: 1
Mar 17 11:17:11.181: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ff1320 exit status 1 true [0xc00122e3e0 0xc00122e3f8 0xc00122e410] [0xc00122e3e0 0xc00122e3f8 0xc00122e410] [0xc00122e3f0 0xc00122e408] [0x935700 0x935700] 0xc001ed8000 }:
Command stdout:
stderr:
error: unable to upgrade connection: container not found ("nginx")
error:
exit status 1
Mar 17 11:17:21.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:17:21.277: INFO: rc: 1
Mar 17 11:17:21.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020cdec0 exit status 1 true [0xc00187c620 0xc00187c640 0xc00187c658] [0xc00187c620 0xc00187c640 0xc00187c658] [0xc00187c638 0xc00187c650] [0x935700 0x935700] 0xc000f69380 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Mar 17 11:17:31.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:17:31.363: INFO: rc: 1
Mar 17 11:17:31.363: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002133560 exit status 1 true [0xc001048500 0xc001048518 0xc001048550] [0xc001048500 0xc001048518 0xc001048550] [0xc001048510 0xc001048548] [0x935700 0x935700] 0xc001963140 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Mar 17 11:17:41.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:17:41.455: INFO: rc: 1
Mar 17 11:17:41.455: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ff1440 exit status 1 true [0xc00122e418 0xc00122e430 0xc00122e448] [0xc00122e418 0xc00122e430 0xc00122e448] [0xc00122e428 0xc00122e440] [0x935700 0x935700] 0xc001ed82a0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Mar 17 11:17:51.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:17:51.541: INFO: rc: 1
Mar 17 11:17:51.541: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021336b0 exit status 1 true [0xc001048558 0xc001048570 0xc0010485a0] [0xc001048558 0xc001048570 0xc0010485a0] [0xc001048568 0xc001048588] [0x935700 0x935700] 0xc001963c80 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Mar 17 11:18:01.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:18:01.641: INFO: rc: 1
Mar 17 11:18:01.641: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002680030 exit status 1 true [0xc00187c660 0xc00187c678 0xc00187c690] [0xc00187c660 0xc00187c678 0xc00187c690] [0xc00187c670 0xc00187c688] [0x935700 0x935700] 0xc000f69920 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Mar 17 11:18:11.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:18:11.732: INFO: rc: 1
Mar 17 11:18:11.732: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8c120 exit status 1 true [0xc0000e8208 0xc001ed0010 0xc001ed0028] [0xc0000e8208 0xc001ed0010 0xc001ed0028] [0xc001ed0008 0xc001ed0020] [0x935700 0x935700] 0xc0017ca180 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Mar 17 11:18:21.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 11:18:21.821: INFO: rc: 1
Mar 17 11:18:21.821: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc000f4e120 exit status 1 true [0xc000ed6020 0xc000ed6128 0xc000ed6288] [0xc000ed6020 0xc000ed6128 0xc000ed6288] [0xc000ed6098 0xc000ed6210] [0x935700 0x935700] 0xc00102e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:18:31.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:18:31.918: INFO: rc: 1 Mar 17 11:18:31.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b0120 exit status 1 true [0xc000a36000 0xc000a36038 0xc000a36078] [0xc000a36000 0xc000a36038 0xc000a36078] [0xc000a36028 0xc000a36060] [0x935700 0x935700] 0xc000df6e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:18:41.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:18:42.000: INFO: rc: 1 Mar 17 11:18:42.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4120 exit status 1 true [0xc000ad2000 0xc000ad2018 0xc000ad2030] [0xc000ad2000 0xc000ad2018 0xc000ad2030] [0xc000ad2010 0xc000ad2028] [0x935700 0x935700] 0xc001a19200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 
11:18:52.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:18:52.087: INFO: rc: 1 Mar 17 11:18:52.088: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b0390 exit status 1 true [0xc000a36098 0xc000a360f8 0xc000a36130] [0xc000a36098 0xc000a360f8 0xc000a36130] [0xc000a360e0 0xc000a36120] [0x935700 0x935700] 0xc00174a360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:19:02.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:19:02.191: INFO: rc: 1 Mar 17 11:19:02.191: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8c300 exit status 1 true [0xc001ed0030 0xc001ed0048 0xc001ed0060] [0xc001ed0030 0xc001ed0048 0xc001ed0060] [0xc001ed0040 0xc001ed0058] [0x935700 0x935700] 0xc0017ca660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:19:12.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:19:12.281: INFO: rc: 1 Mar 17 11:19:12.281: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b04e0 exit status 1 true [0xc000a36148 0xc000a361a0 0xc000a361d8] [0xc000a36148 0xc000a361a0 0xc000a361d8] [0xc000a36188 0xc000a361c8] [0x935700 0x935700] 0xc00166e780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:19:22.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:19:22.369: INFO: rc: 1 Mar 17 11:19:22.369: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f4e270 exit status 1 true [0xc000ed6320 0xc000ed64c0 0xc000ed6578] [0xc000ed6320 0xc000ed64c0 0xc000ed6578] [0xc000ed6438 0xc000ed6538] [0x935700 0x935700] 0xc00102ede0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:19:32.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:19:32.455: INFO: rc: 1 Mar 17 11:19:32.455: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4270 exit status 1 true [0xc000ad2038 0xc000ad2050 0xc000ad2068] 
[0xc000ad2038 0xc000ad2050 0xc000ad2068] [0xc000ad2048 0xc000ad2060] [0x935700 0x935700] 0xc00174cb40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:19:42.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:19:42.545: INFO: rc: 1 Mar 17 11:19:42.545: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4390 exit status 1 true [0xc000ad2070 0xc000ad2088 0xc000ad20a0] [0xc000ad2070 0xc000ad2088 0xc000ad20a0] [0xc000ad2080 0xc000ad2098] [0x935700 0x935700] 0xc001946720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:19:52.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:19:52.628: INFO: rc: 1 Mar 17 11:19:52.628: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4540 exit status 1 true [0xc000ad20a8 0xc000ad20c0 0xc000ad20d8] [0xc000ad20a8 0xc000ad20c0 0xc000ad20d8] [0xc000ad20b8 0xc000ad20d0] [0x935700 0x935700] 0xc0019672c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:20:02.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:20:02.717: INFO: rc: 1 Mar 17 11:20:02.717: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4690 exit status 1 true [0xc000ad20e0 0xc000ad20f8 0xc000ad2110] [0xc000ad20e0 0xc000ad20f8 0xc000ad2110] [0xc000ad20f0 0xc000ad2108] [0x935700 0x935700] 0xc001967e00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:20:12.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:20:12.810: INFO: rc: 1 Mar 17 11:20:12.810: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8c510 exit status 1 true [0xc001ed0070 0xc001ed0088 0xc001ed00a0] [0xc001ed0070 0xc001ed0088 0xc001ed00a0] [0xc001ed0080 0xc001ed0098] [0x935700 0x935700] 0xc0017cacc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:20:22.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:20:22.891: INFO: rc: 1 Mar 17 11:20:22.891: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8c150 exit status 1 true [0xc0000e8208 0xc000ad2008 0xc000ad2020] [0xc0000e8208 0xc000ad2008 0xc000ad2020] [0xc000ad2000 0xc000ad2018] [0x935700 0x935700] 0xc001967680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:20:32.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:20:32.974: INFO: rc: 1 Mar 17 11:20:32.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b01b0 exit status 1 true [0xc001ed0000 0xc001ed0018 0xc001ed0030] [0xc001ed0000 0xc001ed0018 0xc001ed0030] [0xc001ed0010 0xc001ed0028] [0x935700 0x935700] 0xc001947b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:20:42.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:20:43.059: INFO: rc: 1 Mar 17 11:20:43.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4150 exit status 1 true [0xc000a36000 0xc000a36038 0xc000a36078] [0xc000a36000 0xc000a36038 0xc000a36078] [0xc000a36028 0xc000a36060] 
[0x935700 0x935700] 0xc0013c3b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:20:53.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:20:53.154: INFO: rc: 1 Mar 17 11:20:53.154: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da42a0 exit status 1 true [0xc000a36098 0xc000a360f8 0xc000a36130] [0xc000a36098 0xc000a360f8 0xc000a36130] [0xc000a360e0 0xc000a36120] [0x935700 0x935700] 0xc00174b500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:21:03.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:21:03.238: INFO: rc: 1 Mar 17 11:21:03.238: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8c540 exit status 1 true [0xc000ad2028 0xc000ad2040 0xc000ad2058] [0xc000ad2028 0xc000ad2040 0xc000ad2058] [0xc000ad2038 0xc000ad2050] [0x935700 0x935700] 0xc001a18060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:21:13.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Mar 17 11:21:13.332: INFO: rc: 1 Mar 17 11:21:13.332: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f4e150 exit status 1 true [0xc000ed6020 0xc000ed6128 0xc000ed6288] [0xc000ed6020 0xc000ed6128 0xc000ed6288] [0xc000ed6098 0xc000ed6210] [0x935700 0x935700] 0xc000df6e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:21:23.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:21:23.417: INFO: rc: 1 Mar 17 11:21:23.417: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4420 exit status 1 true [0xc000a36148 0xc000a361a0 0xc000a361d8] [0xc000a36148 0xc000a361a0 0xc000a361d8] [0xc000a36188 0xc000a361c8] [0x935700 0x935700] 0xc0012fe240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:21:33.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:21:33.504: INFO: rc: 1 Mar 17 11:21:33.504: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001da4600 exit status 1 true [0xc000a361f0 0xc000a36230 0xc000a36270] [0xc000a361f0 0xc000a36230 0xc000a36270] [0xc000a36220 0xc000a36258] [0x935700 0x935700] 0xc0012fe8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:21:43.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:21:43.594: INFO: rc: 1 Mar 17 11:21:43.594: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f4e2a0 exit status 1 true [0xc000ed6320 0xc000ed64c0 0xc000ed6578] [0xc000ed6320 0xc000ed64c0 0xc000ed6578] [0xc000ed6438 0xc000ed6538] [0x935700 0x935700] 0xc0017ca180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:21:53.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:21:53.689: INFO: rc: 1 Mar 17 11:21:53.689: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8c6c0 exit status 1 true [0xc000ad2060 0xc000ad2078 0xc000ad2090] [0xc000ad2060 0xc000ad2078 0xc000ad2090] [0xc000ad2070 0xc000ad2088] [0x935700 0x935700] 0xc001a19b00 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:22:03.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:22:03.781: INFO: rc: 1 Mar 17 11:22:03.781: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f4e3f0 exit status 1 true [0xc000ed6590 0xc000ed6648 0xc000ed6710] [0xc000ed6590 0xc000ed6648 0xc000ed6710] [0xc000ed6608 0xc000ed6668] [0x935700 0x935700] 0xc0017ca660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 17 11:22:13.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kl4f6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:22:13.876: INFO: rc: 1 Mar 17 11:22:13.876: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Mar 17 11:22:13.877: INFO: Scaling statefulset ss to 0 Mar 17 11:22:13.885: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 17 11:22:13.888: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kl4f6 Mar 17 11:22:13.890: INFO: Scaling statefulset ss to 0 Mar 17 11:22:13.898: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:22:13.901: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:22:13.961: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-kl4f6" for this suite. Mar 17 11:22:19.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:22:20.056: INFO: namespace: e2e-tests-statefulset-kl4f6, resource: bindings, ignored listing per whitelist Mar 17 11:22:20.062: INFO: namespace e2e-tests-statefulset-kl4f6 deletion completed in 6.097332692s • [SLOW TEST:371.088 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:22:20.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
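The long run of "Waiting 10s to retry failed RunHostCmd" records in the StatefulSet test above (11:17:11 through 11:22:13) is a fixed-interval retry-until-deadline loop: run the kubectl exec, and on a non-zero rc wait 10 seconds and try again until an overall budget of roughly five minutes is spent, then give up and carry on. A minimal Python sketch of that pattern, with an injectable clock and sleep so it can be exercised without real waiting; the function and parameter names here are illustrative, not the e2e framework's own:

```python
import time

def retry_cmd(run_cmd, timeout=300.0, interval=10.0,
              now=time.monotonic, sleep=time.sleep):
    """Run run_cmd until it returns rc 0 or the deadline passes.

    run_cmd returns (rc, output). The last result is returned either
    way, mirroring how the e2e loop eventually gives up and moves on.
    """
    deadline = now() + timeout
    while True:
        rc, out = run_cmd()
        if rc == 0 or now() >= deadline:
            return rc, out
        sleep(interval)  # "Waiting 10s to retry failed RunHostCmd"
```

With a 300-second budget and a 10-second interval this makes on the order of thirty attempts, which matches the cadence of the retries in the log above.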
STEP: Creating a pod to test downward API volume plugin Mar 17 11:22:20.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-q7z45" to be "success or failure" Mar 17 11:22:20.190: INFO: Pod "downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.772531ms Mar 17 11:22:22.194: INFO: Pod "downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007794379s Mar 17 11:22:24.198: INFO: Pod "downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012118719s STEP: Saw pod success Mar 17 11:22:24.198: INFO: Pod "downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:22:24.201: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 11:22:24.245: INFO: Waiting for pod downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f to disappear Mar 17 11:22:24.250: INFO: Pod downwardapi-volume-90eb7c9f-6841-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:22:24.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q7z45" for this suite. 
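The "Waiting up to 5m0s for pod ... to be "success or failure"" records above show the framework polling the pod's phase every couple of seconds until it reaches a terminal phase (Succeeded or Failed) or the budget runs out. A hedged sketch of that wait loop in Python, again with an injectable clock so it is testable; the names are mine, not the framework's:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase.

    Returns (phase, elapsed_seconds); raises TimeoutError if the pod
    is still Pending/Running when the budget is exhausted.
    """
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        if phase in TERMINAL_PHASES:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        sleep(interval)
```

In the log the pod is observed Pending twice (at ~4 ms and ~2 s elapsed) before the third poll sees Succeeded at ~4 s, exactly this shape of loop.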
Mar 17 11:22:30.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:22:30.306: INFO: namespace: e2e-tests-downward-api-q7z45, resource: bindings, ignored listing per whitelist Mar 17 11:22:30.339: INFO: namespace e2e-tests-downward-api-q7z45 deletion completed in 6.085579716s • [SLOW TEST:10.276 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:22:30.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Mar 17 11:22:30.961: INFO: created pod pod-service-account-defaultsa Mar 17 11:22:30.961: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 17 11:22:30.964: INFO: created pod pod-service-account-mountsa Mar 17 11:22:30.964: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 17 11:22:30.970: INFO: created pod pod-service-account-nomountsa Mar 17 11:22:30.970: INFO: pod 
pod-service-account-nomountsa service account token volume mount: false Mar 17 11:22:31.001: INFO: created pod pod-service-account-defaultsa-mountspec Mar 17 11:22:31.001: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 17 11:22:31.006: INFO: created pod pod-service-account-mountsa-mountspec Mar 17 11:22:31.006: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 17 11:22:31.026: INFO: created pod pod-service-account-nomountsa-mountspec Mar 17 11:22:31.026: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 17 11:22:31.098: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 17 11:22:31.098: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 17 11:22:31.110: INFO: created pod pod-service-account-mountsa-nomountspec Mar 17 11:22:31.110: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 17 11:22:31.127: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 17 11:22:31.127: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:22:31.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-sx2sf" for this suite. 
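The nine pods in the ServiceAccounts test above exercise the token-automount precedence matrix: `automountServiceAccountToken` set on the pod spec overrides the setting on the service account, and when neither sets it the token is mounted by default. A small Python model of that precedence rule (my own helper, not framework code), which reproduces the true/false results logged for each pod:

```python
def should_automount(pod_setting, sa_setting):
    """Effective automountServiceAccountToken for a pod.

    pod_setting / sa_setting are True, False, or None (unset).
    The pod spec wins over the service account; the overall
    default when both are unset is to mount the token.
    """
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True
```

This matches the log: `pod-service-account-nomountsa-mountspec` mounts the token (pod-level True beats the service account's False), while `pod-service-account-mountsa-nomountspec` does not.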
Mar 17 11:22:57.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:22:57.356: INFO: namespace: e2e-tests-svcaccounts-sx2sf, resource: bindings, ignored listing per whitelist
Mar 17 11:22:57.394: INFO: namespace e2e-tests-svcaccounts-sx2sf deletion completed in 26.209678248s
• [SLOW TEST:27.054 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:22:57.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 17 11:23:02.069: INFO: Successfully updated pod "annotationupdatea72dc9a3-6841-11ea-b08f-0242ac11000f"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:23:04.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4v8x4" for this suite.
Mar 17 11:23:26.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:23:26.176: INFO: namespace: e2e-tests-projected-4v8x4, resource: bindings, ignored listing per whitelist
Mar 17 11:23:26.219: INFO: namespace e2e-tests-projected-4v8x4 deletion completed in 22.114357002s
• [SLOW TEST:28.825 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:23:26.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:23:26.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-z69p4" to be "success or failure"
Mar 17 11:23:26.328: INFO: Pod "downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.860466ms
Mar 17 11:23:28.332: INFO: Pod "downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018777978s
Mar 17 11:23:30.336: INFO: Pod "downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022989102s
STEP: Saw pod success
Mar 17 11:23:30.336: INFO: Pod "downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:23:30.340: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f container client-container:
STEP: delete the pod
Mar 17 11:23:30.399: INFO: Waiting for pod downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f to disappear
Mar 17 11:23:30.403: INFO: Pod downwardapi-volume-b856e836-6841-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:23:30.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z69p4" for this suite.
Mar 17 11:23:36.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:23:36.470: INFO: namespace: e2e-tests-projected-z69p4, resource: bindings, ignored listing per whitelist
Mar 17 11:23:36.497: INFO: namespace e2e-tests-projected-z69p4 deletion completed in 6.090519844s
• [SLOW TEST:10.277 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:23:36.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 17 11:23:36.615: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-r4d8v,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4d8v/configmaps/e2e-watch-test-resource-version,UID:be76d19a-6841-11ea-99e8-0242ac110002,ResourceVersion:316705,Generation:0,CreationTimestamp:2020-03-17 11:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 17 11:23:36.615: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-r4d8v,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4d8v/configmaps/e2e-watch-test-resource-version,UID:be76d19a-6841-11ea-99e8-0242ac110002,ResourceVersion:316706,Generation:0,CreationTimestamp:2020-03-17 11:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:23:36.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-r4d8v" for this suite.
Mar 17 11:23:42.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:23:42.681: INFO: namespace: e2e-tests-watch-r4d8v, resource: bindings, ignored listing per whitelist
Mar 17 11:23:42.730: INFO: namespace e2e-tests-watch-r4d8v deletion completed in 6.111369303s
• [SLOW TEST:6.233 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:23:42.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-z2fk
STEP: Creating a pod to test atomic-volume-subpath
Mar 17 11:23:42.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z2fk" in namespace "e2e-tests-subpath-rdx2n" to be "success or failure"
Mar 17 11:23:42.913: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Pending", Reason="", readiness=false. Elapsed: 17.141925ms
Mar 17 11:23:44.917: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021029473s
Mar 17 11:23:46.921: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025090558s
Mar 17 11:23:48.924: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 6.028278218s
Mar 17 11:23:50.928: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 8.032714304s
Mar 17 11:23:52.933: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 10.037231689s
Mar 17 11:23:54.937: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 12.041601536s
Mar 17 11:23:56.942: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 14.045834021s
Mar 17 11:23:58.946: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 16.049736893s
Mar 17 11:24:00.950: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 18.054181368s
Mar 17 11:24:02.954: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 20.058714572s
Mar 17 11:24:04.959: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 22.062731539s
Mar 17 11:24:06.962: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Running", Reason="", readiness=false. Elapsed: 24.065982249s
Mar 17 11:24:08.966: INFO: Pod "pod-subpath-test-downwardapi-z2fk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.070052677s
STEP: Saw pod success
Mar 17 11:24:08.966: INFO: Pod "pod-subpath-test-downwardapi-z2fk" satisfied condition "success or failure"
Mar 17 11:24:08.968: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-z2fk container test-container-subpath-downwardapi-z2fk:
STEP: delete the pod
Mar 17 11:24:09.023: INFO: Waiting for pod pod-subpath-test-downwardapi-z2fk to disappear
Mar 17 11:24:09.033: INFO: Pod pod-subpath-test-downwardapi-z2fk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-z2fk
Mar 17 11:24:09.033: INFO: Deleting pod "pod-subpath-test-downwardapi-z2fk" in namespace "e2e-tests-subpath-rdx2n"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:24:09.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rdx2n" for this suite.
Mar 17 11:24:15.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:24:15.082: INFO: namespace: e2e-tests-subpath-rdx2n, resource: bindings, ignored listing per whitelist
Mar 17 11:24:15.112: INFO: namespace e2e-tests-subpath-rdx2n deletion completed in 6.072595863s
• [SLOW TEST:32.381 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:24:15.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 17 11:24:15.245: INFO: Waiting up to 5m0s for pod "pod-d57f5be8-6841-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-b8mnx" to be "success or failure"
Mar 17 11:24:15.249: INFO: Pod "pod-d57f5be8-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.983746ms
Mar 17 11:24:17.252: INFO: Pod "pod-d57f5be8-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007520841s
Mar 17 11:24:19.256: INFO: Pod "pod-d57f5be8-6841-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011250839s
STEP: Saw pod success
Mar 17 11:24:19.256: INFO: Pod "pod-d57f5be8-6841-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:24:19.259: INFO: Trying to get logs from node hunter-worker pod pod-d57f5be8-6841-11ea-b08f-0242ac11000f container test-container:
STEP: delete the pod
Mar 17 11:24:19.298: INFO: Waiting for pod pod-d57f5be8-6841-11ea-b08f-0242ac11000f to disappear
Mar 17 11:24:19.312: INFO: Pod pod-d57f5be8-6841-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:24:19.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b8mnx" for this suite.
Mar 17 11:24:25.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:24:25.435: INFO: namespace: e2e-tests-emptydir-b8mnx, resource: bindings, ignored listing per whitelist
Mar 17 11:24:25.447: INFO: namespace e2e-tests-emptydir-b8mnx deletion completed in 6.114827854s
• [SLOW TEST:10.336 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:24:25.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 17 11:24:25.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p9p8d'
Mar 17 11:24:27.475: INFO: stderr: ""
Mar 17 11:24:27.475: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Mar 17 11:24:27.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-p9p8d'
Mar 17 11:24:31.764: INFO: stderr: ""
Mar 17 11:24:31.764: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:24:31.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p9p8d" for this suite.
Mar 17 11:24:37.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:24:37.834: INFO: namespace: e2e-tests-kubectl-p9p8d, resource: bindings, ignored listing per whitelist
Mar 17 11:24:37.853: INFO: namespace e2e-tests-kubectl-p9p8d deletion completed in 6.085714577s
• [SLOW TEST:12.405 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:24:37.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Mar 17 11:24:37.963: INFO: Waiting up to 5m0s for pod "client-containers-e308a858-6841-11ea-b08f-0242ac11000f" in namespace "e2e-tests-containers-zslt7" to be "success or failure"
Mar 17 11:24:37.967: INFO: Pod "client-containers-e308a858-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317378ms
Mar 17 11:24:39.971: INFO: Pod "client-containers-e308a858-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008527144s
Mar 17 11:24:41.975: INFO: Pod "client-containers-e308a858-6841-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01231276s
STEP: Saw pod success
Mar 17 11:24:41.975: INFO: Pod "client-containers-e308a858-6841-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:24:41.978: INFO: Trying to get logs from node hunter-worker pod client-containers-e308a858-6841-11ea-b08f-0242ac11000f container test-container:
STEP: delete the pod
Mar 17 11:24:42.010: INFO: Waiting for pod client-containers-e308a858-6841-11ea-b08f-0242ac11000f to disappear
Mar 17 11:24:42.021: INFO: Pod client-containers-e308a858-6841-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:24:42.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zslt7" for this suite.
Mar 17 11:24:48.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:24:48.114: INFO: namespace: e2e-tests-containers-zslt7, resource: bindings, ignored listing per whitelist
Mar 17 11:24:48.157: INFO: namespace e2e-tests-containers-zslt7 deletion completed in 6.132922029s
• [SLOW TEST:10.304 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:24:48.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Mar 17 11:24:48.274: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix367429313/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:24:48.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gsmrn" for this suite.
Mar 17 11:24:54.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:24:54.399: INFO: namespace: e2e-tests-kubectl-gsmrn, resource: bindings, ignored listing per whitelist
Mar 17 11:24:54.445: INFO: namespace e2e-tests-kubectl-gsmrn deletion completed in 6.096066419s
• [SLOW TEST:6.287 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:24:54.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-eced3f61-6841-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 17 11:24:54.554: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-m5qrc" to be "success or failure"
Mar 17 11:24:54.614: INFO: Pod "pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 59.932407ms
Mar 17 11:24:56.619: INFO: Pod "pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064263215s
Mar 17 11:24:58.623: INFO: Pod "pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068274934s
STEP: Saw pod success
Mar 17 11:24:58.623: INFO: Pod "pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:24:58.625: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f container projected-secret-volume-test:
STEP: delete the pod
Mar 17 11:24:58.655: INFO: Waiting for pod pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f to disappear
Mar 17 11:24:58.666: INFO: Pod pod-projected-secrets-eceeeccf-6841-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:24:58.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m5qrc" for this suite.
Mar 17 11:25:04.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:25:04.769: INFO: namespace: e2e-tests-projected-m5qrc, resource: bindings, ignored listing per whitelist
Mar 17 11:25:04.786: INFO: namespace e2e-tests-projected-m5qrc deletion completed in 6.116005827s
• [SLOW TEST:10.340 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:25:04.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:25:04.854: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-b9wkk" to be "success or failure"
Mar 17 11:25:04.871: INFO: Pod "downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.338398ms
Mar 17 11:25:06.875: INFO: Pod "downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020948023s
Mar 17 11:25:08.879: INFO: Pod "downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025371473s
STEP: Saw pod success
Mar 17 11:25:08.879: INFO: Pod "downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:25:08.882: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f container client-container:
STEP: delete the pod
Mar 17 11:25:08.916: INFO: Waiting for pod downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f to disappear
Mar 17 11:25:08.920: INFO: Pod downwardapi-volume-f312b8e6-6841-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:25:08.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b9wkk" for this suite.
Mar 17 11:25:14.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:25:14.999: INFO: namespace: e2e-tests-downward-api-b9wkk, resource: bindings, ignored listing per whitelist
Mar 17 11:25:15.016: INFO: namespace e2e-tests-downward-api-b9wkk deletion completed in 6.091849824s
• [SLOW TEST:10.229 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:25:15.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 17 11:25:15.091: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:25:21.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-78dh9" for this suite.
Mar 17 11:25:27.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:25:27.692: INFO: namespace: e2e-tests-init-container-78dh9, resource: bindings, ignored listing per whitelist
Mar 17 11:25:27.744: INFO: namespace e2e-tests-init-container-78dh9 deletion completed in 6.135463206s
• [SLOW TEST:12.728 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:25:27.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Mar 17 11:25:31.915: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-00c700fd-6842-11ea-b08f-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-pods-v8zfc", SelfLink:"/api/v1/namespaces/e2e-tests-pods-v8zfc/pods/pod-submit-remove-00c700fd-6842-11ea-b08f-0242ac11000f", UID:"00c8a1f9-6842-11ea-99e8-0242ac110002", ResourceVersion:"317145", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720041127, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"839364932"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-b2n4d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024cfe40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b2n4d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002022988), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002493b00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020229d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020229f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020229f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil),
EnableServiceLinks:(*bool)(0xc0020229fc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041127, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041130, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041130, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041127, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.121", StartTime:(*v1.Time)(0xc0025ad980), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0025ad9a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://9e8d4c619cabe766bbe0951b4a8aac21075c164641655a79a1874632d5b459e2"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 17 11:25:36.927: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:25:36.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-v8zfc" for this suite.
Mar 17 11:25:42.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:25:43.024: INFO: namespace: e2e-tests-pods-v8zfc, resource: bindings, ignored listing per whitelist
Mar 17 11:25:43.024: INFO: namespace e2e-tests-pods-v8zfc deletion completed in 6.089399404s
• [SLOW TEST:15.280 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:25:43.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-hpbtq
Mar 17 11:25:47.160: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-hpbtq
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 11:25:47.163: INFO: Initial restart count of pod liveness-exec is 0
Mar 17 11:26:41.271: INFO: Restart count of pod e2e-tests-container-probe-hpbtq/liveness-exec is now 1 (54.108505052s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:26:41.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hpbtq" for this suite.
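The liveness-exec pod in this test is restarted once its exec probe starts failing: the kubelet periodically runs `cat /tmp/health` inside the container and treats exit status 0 as healthy, non-zero as a probe failure. A minimal shell sketch of that semantics (the file path mirrors the test; the temp directory and echo messages are purely illustrative):

```shell
# Simulate the exec liveness check: exit status of `cat` on the
# health file decides whether the container is considered healthy.
health="$(mktemp -d)/health"

touch "$health"                        # container creates the file at startup
if cat "$health" >/dev/null 2>&1; then
  echo "probe ok"                      # kubelet leaves the container running
fi

rm "$health"                           # container later removes the file
if ! cat "$health" >/dev/null 2>&1; then
  echo "probe failed"                  # after enough failures, kubelet restarts it
fi
```

In the log above this is exactly what happens: the initial restart count is 0, and roughly 54 seconds later (once the probe has failed repeatedly) the restart count becomes 1.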
Mar 17 11:26:47.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:26:47.388: INFO: namespace: e2e-tests-container-probe-hpbtq, resource: bindings, ignored listing per whitelist
Mar 17 11:26:47.422: INFO: namespace e2e-tests-container-probe-hpbtq deletion completed in 6.133908125s
• [SLOW TEST:64.397 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:26:47.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-3047fdae-6842-11ea-b08f-0242ac11000f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:26:51.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qgh28" for this suite.
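The ConfigMap test above mounts both a text key and a binary key into a volume and waits for each to appear as a file. A minimal sketch of such an object (the names here are illustrative, not the generated ones from the test); `data` values are plain UTF-8 strings, while `binaryData` values are base64-encoded and surface as raw bytes in the mounted file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo    # illustrative name
data:
  text-data: "some text"         # mounted as a plain-text file
binaryData:
  binary-data: 3q2+7w==          # base64 for the bytes 0xde 0xad 0xbe 0xef
```

A key may appear in `data` or `binaryData`, but not both; when the ConfigMap is mounted as a volume, each key becomes a file whose content is the decoded value.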
Mar 17 11:27:13.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:27:13.641: INFO: namespace: e2e-tests-configmap-qgh28, resource: bindings, ignored listing per whitelist
Mar 17 11:27:13.712: INFO: namespace e2e-tests-configmap-qgh28 deletion completed in 22.114386617s
• [SLOW TEST:26.289 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:27:13.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-mbdfp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 17 11:27:13.809: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 17 11:27:39.905: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.57:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mbdfp
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 17 11:27:39.905: INFO: >>> kubeConfig: /root/.kube/config I0317 11:27:39.938927 6 log.go:172] (0xc000302840) (0xc001bbae60) Create stream I0317 11:27:39.938968 6 log.go:172] (0xc000302840) (0xc001bbae60) Stream added, broadcasting: 1 I0317 11:27:39.945780 6 log.go:172] (0xc000302840) Reply frame received for 1 I0317 11:27:39.945821 6 log.go:172] (0xc000302840) (0xc000ead360) Create stream I0317 11:27:39.945833 6 log.go:172] (0xc000302840) (0xc000ead360) Stream added, broadcasting: 3 I0317 11:27:39.946603 6 log.go:172] (0xc000302840) Reply frame received for 3 I0317 11:27:39.946632 6 log.go:172] (0xc000302840) (0xc000ead5e0) Create stream I0317 11:27:39.946642 6 log.go:172] (0xc000302840) (0xc000ead5e0) Stream added, broadcasting: 5 I0317 11:27:39.947323 6 log.go:172] (0xc000302840) Reply frame received for 5 I0317 11:27:40.018914 6 log.go:172] (0xc000302840) Data frame received for 3 I0317 11:27:40.018957 6 log.go:172] (0xc000ead360) (3) Data frame handling I0317 11:27:40.018991 6 log.go:172] (0xc000ead360) (3) Data frame sent I0317 11:27:40.019154 6 log.go:172] (0xc000302840) Data frame received for 3 I0317 11:27:40.019181 6 log.go:172] (0xc000ead360) (3) Data frame handling I0317 11:27:40.019221 6 log.go:172] (0xc000302840) Data frame received for 5 I0317 11:27:40.019257 6 log.go:172] (0xc000ead5e0) (5) Data frame handling I0317 11:27:40.021475 6 log.go:172] (0xc000302840) Data frame received for 1 I0317 11:27:40.021514 6 log.go:172] (0xc001bbae60) (1) Data frame handling I0317 11:27:40.021549 6 log.go:172] (0xc001bbae60) (1) Data frame sent I0317 11:27:40.021580 6 log.go:172] (0xc000302840) (0xc001bbae60) Stream removed, broadcasting: 1 I0317 11:27:40.021634 6 log.go:172] (0xc000302840) Go away received I0317 11:27:40.021758 6 log.go:172] (0xc000302840) (0xc001bbae60) Stream removed, broadcasting: 1 I0317 11:27:40.021784 6 log.go:172] 
(0xc000302840) (0xc000ead360) Stream removed, broadcasting: 3 I0317 11:27:40.021802 6 log.go:172] (0xc000302840) (0xc000ead5e0) Stream removed, broadcasting: 5 Mar 17 11:27:40.021: INFO: Found all expected endpoints: [netserver-0] Mar 17 11:27:40.025: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.123:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mbdfp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 17 11:27:40.025: INFO: >>> kubeConfig: /root/.kube/config I0317 11:27:40.061700 6 log.go:172] (0xc000d6a790) (0xc000ead900) Create stream I0317 11:27:40.061725 6 log.go:172] (0xc000d6a790) (0xc000ead900) Stream added, broadcasting: 1 I0317 11:27:40.065531 6 log.go:172] (0xc000d6a790) Reply frame received for 1 I0317 11:27:40.065599 6 log.go:172] (0xc000d6a790) (0xc001bbaf00) Create stream I0317 11:27:40.065625 6 log.go:172] (0xc000d6a790) (0xc001bbaf00) Stream added, broadcasting: 3 I0317 11:27:40.066683 6 log.go:172] (0xc000d6a790) Reply frame received for 3 I0317 11:27:40.066726 6 log.go:172] (0xc000d6a790) (0xc001466dc0) Create stream I0317 11:27:40.066739 6 log.go:172] (0xc000d6a790) (0xc001466dc0) Stream added, broadcasting: 5 I0317 11:27:40.067606 6 log.go:172] (0xc000d6a790) Reply frame received for 5 I0317 11:27:40.136888 6 log.go:172] (0xc000d6a790) Data frame received for 3 I0317 11:27:40.136941 6 log.go:172] (0xc001bbaf00) (3) Data frame handling I0317 11:27:40.136987 6 log.go:172] (0xc001bbaf00) (3) Data frame sent I0317 11:27:40.137055 6 log.go:172] (0xc000d6a790) Data frame received for 3 I0317 11:27:40.137090 6 log.go:172] (0xc000d6a790) Data frame received for 5 I0317 11:27:40.137235 6 log.go:172] (0xc001466dc0) (5) Data frame handling I0317 11:27:40.137261 6 log.go:172] (0xc001bbaf00) (3) Data frame handling I0317 11:27:40.139413 6 log.go:172] (0xc000d6a790) Data frame received for 1 I0317 
11:27:40.139437 6 log.go:172] (0xc000ead900) (1) Data frame handling I0317 11:27:40.139462 6 log.go:172] (0xc000ead900) (1) Data frame sent I0317 11:27:40.139478 6 log.go:172] (0xc000d6a790) (0xc000ead900) Stream removed, broadcasting: 1 I0317 11:27:40.139500 6 log.go:172] (0xc000d6a790) Go away received I0317 11:27:40.139669 6 log.go:172] (0xc000d6a790) (0xc000ead900) Stream removed, broadcasting: 1 I0317 11:27:40.139699 6 log.go:172] (0xc000d6a790) (0xc001bbaf00) Stream removed, broadcasting: 3 I0317 11:27:40.139714 6 log.go:172] (0xc000d6a790) (0xc001466dc0) Stream removed, broadcasting: 5 Mar 17 11:27:40.139: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:27:40.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-mbdfp" for this suite. Mar 17 11:28:04.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:04.242: INFO: namespace: e2e-tests-pod-network-test-mbdfp, resource: bindings, ignored listing per whitelist Mar 17 11:28:04.260: INFO: namespace e2e-tests-pod-network-test-mbdfp deletion completed in 24.11697079s • [SLOW TEST:50.548 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:28:04.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-x5xx4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x5xx4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 200.151.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.151.200_udp@PTR;check="$$(dig +tcp +noall +answer +search 200.151.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.151.200_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-x5xx4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-x5xx4.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-x5xx4.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x5xx4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 200.151.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.151.200_udp@PTR;check="$$(dig +tcp +noall +answer +search 200.151.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.151.200_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 17 11:28:10.496: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.499: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.502: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.514: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.517: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod 
e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.560: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.563: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.566: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.586: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.588: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.591: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.593: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested 
resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.597: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:10.614: INFO: Lookups using e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc] Mar 17 11:28:15.627: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.630: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.632: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.643: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod 
e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.665: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.688: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.691: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.694: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.698: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.701: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.705: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested 
resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.709: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.712: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:15.732: INFO: Lookups using e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc] Mar 17 11:28:20.619: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.623: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.626: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod 
e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.680: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.683: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.687: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.689: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.692: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the 
requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.695: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.698: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.701: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:20.719: INFO: Lookups using e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc] Mar 17 11:28:25.622: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.626: INFO: Unable to read wheezy_tcp@dns-test-service from pod 
e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.630: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.640: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.698: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.701: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.704: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.707: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the 
requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.709: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.712: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.715: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.717: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:25.737: INFO: Lookups using e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc] Mar 17 11:28:30.619: INFO: Unable to read 
wheezy_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.623: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.626: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.640: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.644: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.667: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.689: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.692: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not 
find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.696: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.699: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.702: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.705: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.708: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:30.725: INFO: Lookups using e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 
jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc] Mar 17 11:28:35.665: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.669: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.672: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.684: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.707: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.729: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.732: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not 
find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.735: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.738: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.742: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.745: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.748: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.756: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc from pod e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f: the server could not find the requested resource (get pods dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f) Mar 17 11:28:35.792: INFO: Lookups using e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-x5xx4 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-x5xx4 jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4 jessie_udp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@dns-test-service.e2e-tests-dns-x5xx4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-x5xx4.svc] Mar 17 11:28:40.725: INFO: DNS probes using e2e-tests-dns-x5xx4/dns-test-5e1db8c5-6842-11ea-b08f-0242ac11000f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:28:41.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-x5xx4" for this suite. Mar 17 11:28:47.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:47.176: INFO: namespace: e2e-tests-dns-x5xx4, resource: bindings, ignored listing per whitelist Mar 17 11:28:47.209: INFO: namespace e2e-tests-dns-x5xx4 deletion completed in 6.160272824s • [SLOW TEST:42.949 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 
17 11:28:47.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-nl8zh.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-nl8zh.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nl8zh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-nl8zh.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-nl8zh.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nl8zh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 17 11:28:53.419: INFO: DNS probes using e2e-tests-dns-nl8zh/dns-test-77ab1ff2-6842-11ea-b08f-0242ac11000f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:28:53.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-nl8zh" for this suite. Mar 17 11:28:59.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:59.530: INFO: namespace: e2e-tests-dns-nl8zh, resource: bindings, ignored listing per whitelist Mar 17 11:28:59.534: INFO: namespace e2e-tests-dns-nl8zh deletion completed in 6.079628187s • [SLOW TEST:12.325 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 
11:28:59.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Mar 17 11:28:59.637: INFO: Waiting up to 5m0s for pod "client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f" in namespace "e2e-tests-containers-6gj9w" to be "success or failure" Mar 17 11:28:59.641: INFO: Pod "client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.772523ms Mar 17 11:29:01.647: INFO: Pod "client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010069765s Mar 17 11:29:03.651: INFO: Pod "client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014233194s STEP: Saw pod success Mar 17 11:29:03.651: INFO: Pod "client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:29:03.654: INFO: Trying to get logs from node hunter-worker2 pod client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:29:03.672: INFO: Waiting for pod client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f to disappear Mar 17 11:29:03.676: INFO: Pod client-containers-7f02b7f4-6842-11ea-b08f-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:29:03.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-6gj9w" for this suite. 
Mar 17 11:29:09.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:29:09.765: INFO: namespace: e2e-tests-containers-6gj9w, resource: bindings, ignored listing per whitelist Mar 17 11:29:09.766: INFO: namespace e2e-tests-containers-6gj9w deletion completed in 6.086326275s • [SLOW TEST:10.231 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:29:09.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-8524882d-6842-11ea-b08f-0242ac11000f STEP: Creating secret with name s-test-opt-upd-852488a4-6842-11ea-b08f-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8524882d-6842-11ea-b08f-0242ac11000f STEP: Updating secret s-test-opt-upd-852488a4-6842-11ea-b08f-0242ac11000f STEP: Creating secret with name s-test-opt-create-852488d7-6842-11ea-b08f-0242ac11000f STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:30:20.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gjktr" for this suite. Mar 17 11:30:42.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:30:42.372: INFO: namespace: e2e-tests-secrets-gjktr, resource: bindings, ignored listing per whitelist Mar 17 11:30:42.439: INFO: namespace e2e-tests-secrets-gjktr deletion completed in 22.129209105s • [SLOW TEST:92.673 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:30:42.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:30:42.565: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 6.197498ms) Mar 17 11:30:42.568: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.231702ms) Mar 17 11:30:42.572: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.562379ms) Mar 17 11:30:42.575: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.354875ms) Mar 17 11:30:42.578: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.371416ms) Mar 17 11:30:42.582: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.518537ms) Mar 17 11:30:42.585: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.461018ms) Mar 17 11:30:42.589: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.634102ms) Mar 17 11:30:42.593: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.53916ms) Mar 17 11:30:42.597: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.800093ms) Mar 17 11:30:42.601: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.325885ms) Mar 17 11:30:42.605: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.429973ms) Mar 17 11:30:42.608: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.672869ms) Mar 17 11:30:42.612: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.49638ms) Mar 17 11:30:42.615: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.57573ms) Mar 17 11:30:42.619: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.522734ms) Mar 17 11:30:42.623: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.896376ms) Mar 17 11:30:42.627: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.643396ms) Mar 17 11:30:42.630: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.855482ms) Mar 17 11:30:42.634: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.693686ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:30:42.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-45lrs" for this suite. Mar 17 11:30:48.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:30:48.696: INFO: namespace: e2e-tests-proxy-45lrs, resource: bindings, ignored listing per whitelist Mar 17 11:30:48.734: INFO: namespace e2e-tests-proxy-45lrs deletion completed in 6.095986061s • [SLOW TEST:6.294 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:30:48.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:30:48.810: INFO: Creating ReplicaSet my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f Mar 17 11:30:48.845: INFO: Pod name 
my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f: Found 0 pods out of 1 Mar 17 11:30:53.849: INFO: Pod name my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f: Found 1 pods out of 1 Mar 17 11:30:53.849: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f" is running Mar 17 11:30:53.852: INFO: Pod "my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f-ghsg7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:30:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:30:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:30:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:30:48 +0000 UTC Reason: Message:}]) Mar 17 11:30:53.852: INFO: Trying to dial the pod Mar 17 11:30:58.865: INFO: Controller my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f-ghsg7]: "my-hostname-basic-c01761bd-6842-11ea-b08f-0242ac11000f-ghsg7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:30:58.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-dsl8r" for this suite. 
Mar 17 11:31:04.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:31:04.958: INFO: namespace: e2e-tests-replicaset-dsl8r, resource: bindings, ignored listing per whitelist Mar 17 11:31:04.962: INFO: namespace e2e-tests-replicaset-dsl8r deletion completed in 6.09267609s • [SLOW TEST:16.228 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:31:04.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:31:05.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-nztfj" for this suite. 
Mar 17 11:31:11.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:31:11.168: INFO: namespace: e2e-tests-services-nztfj, resource: bindings, ignored listing per whitelist Mar 17 11:31:11.168: INFO: namespace e2e-tests-services-nztfj deletion completed in 6.0921141s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.205 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:31:11.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 17 11:31:11.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc 
--image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-jgvvp' Mar 17 11:31:11.355: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 17 11:31:11.355: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 17 11:31:11.386: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-8q7xs] Mar 17 11:31:11.386: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-8q7xs" in namespace "e2e-tests-kubectl-jgvvp" to be "running and ready" Mar 17 11:31:11.439: INFO: Pod "e2e-test-nginx-rc-8q7xs": Phase="Pending", Reason="", readiness=false. Elapsed: 52.385925ms Mar 17 11:31:13.442: INFO: Pod "e2e-test-nginx-rc-8q7xs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055830143s Mar 17 11:31:15.447: INFO: Pod "e2e-test-nginx-rc-8q7xs": Phase="Running", Reason="", readiness=true. Elapsed: 4.060509499s Mar 17 11:31:15.447: INFO: Pod "e2e-test-nginx-rc-8q7xs" satisfied condition "running and ready" Mar 17 11:31:15.447: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-8q7xs] Mar 17 11:31:15.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jgvvp' Mar 17 11:31:15.551: INFO: stderr: "" Mar 17 11:31:15.551: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Mar 17 11:31:15.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jgvvp' Mar 17 11:31:15.665: INFO: stderr: "" Mar 17 11:31:15.665: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:31:15.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jgvvp" for this suite. Mar 17 11:31:37.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:31:37.768: INFO: namespace: e2e-tests-kubectl-jgvvp, resource: bindings, ignored listing per whitelist Mar 17 11:31:37.787: INFO: namespace e2e-tests-kubectl-jgvvp deletion completed in 22.118868354s • [SLOW TEST:26.619 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:31:37.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Mar 17 11:31:37.890: INFO: Waiting up to 5m0s for pod "var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f" in namespace "e2e-tests-var-expansion-lfz7w" to be "success or failure" Mar 17 11:31:37.937: INFO: Pod "var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.241695ms Mar 17 11:31:39.948: INFO: Pod "var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058637116s Mar 17 11:31:41.953: INFO: Pod "var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.063194999s STEP: Saw pod success Mar 17 11:31:41.953: INFO: Pod "var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:31:41.956: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f container dapi-container: STEP: delete the pod Mar 17 11:31:41.980: INFO: Waiting for pod var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f to disappear Mar 17 11:31:42.044: INFO: Pod var-expansion-dd5656f3-6842-11ea-b08f-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:31:42.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-lfz7w" for this suite. Mar 17 11:31:48.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:31:48.087: INFO: namespace: e2e-tests-var-expansion-lfz7w, resource: bindings, ignored listing per whitelist Mar 17 11:31:48.130: INFO: namespace e2e-tests-var-expansion-lfz7w deletion completed in 6.081240703s • [SLOW TEST:10.342 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:31:48.130: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Mar 17 11:31:52.246: INFO: Pod pod-hostip-e380290a-6842-11ea-b08f-0242ac11000f has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:31:52.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zd8v8" for this suite. Mar 17 11:32:14.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:32:14.311: INFO: namespace: e2e-tests-pods-zd8v8, resource: bindings, ignored listing per whitelist Mar 17 11:32:14.340: INFO: namespace e2e-tests-pods-zd8v8 deletion completed in 22.089638074s • [SLOW TEST:26.210 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:32:14.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: 
Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-ckp5p STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 17 11:32:14.424: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 17 11:32:38.618: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.64:8080/dial?request=hostName&protocol=udp&host=10.244.1.128&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-ckp5p PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 17 11:32:38.618: INFO: >>> kubeConfig: /root/.kube/config I0317 11:32:38.649823 6 log.go:172] (0xc001f622c0) (0xc0022200a0) Create stream I0317 11:32:38.649852 6 log.go:172] (0xc001f622c0) (0xc0022200a0) Stream added, broadcasting: 1 I0317 11:32:38.651216 6 log.go:172] (0xc001f622c0) Reply frame received for 1 I0317 11:32:38.651263 6 log.go:172] (0xc001f622c0) (0xc001ce0a00) Create stream I0317 11:32:38.651276 6 log.go:172] (0xc001f622c0) (0xc001ce0a00) Stream added, broadcasting: 3 I0317 11:32:38.652062 6 log.go:172] (0xc001f622c0) Reply frame received for 3 I0317 11:32:38.652099 6 log.go:172] (0xc001f622c0) (0xc002220140) Create stream I0317 11:32:38.652113 6 log.go:172] (0xc001f622c0) (0xc002220140) Stream added, broadcasting: 5 I0317 11:32:38.652878 6 log.go:172] (0xc001f622c0) Reply frame received for 5 I0317 11:32:38.748324 6 log.go:172] (0xc001f622c0) Data frame received for 3 I0317 11:32:38.748353 6 log.go:172] (0xc001ce0a00) (3) Data frame handling I0317 11:32:38.748377 6 log.go:172] (0xc001ce0a00) (3) Data frame sent I0317 11:32:38.749677 6 log.go:172] (0xc001f622c0) Data frame received for 5 
I0317 11:32:38.749711 6 log.go:172] (0xc002220140) (5) Data frame handling I0317 11:32:38.749979 6 log.go:172] (0xc001f622c0) Data frame received for 3 I0317 11:32:38.750010 6 log.go:172] (0xc001ce0a00) (3) Data frame handling I0317 11:32:38.751386 6 log.go:172] (0xc001f622c0) Data frame received for 1 I0317 11:32:38.751404 6 log.go:172] (0xc0022200a0) (1) Data frame handling I0317 11:32:38.751416 6 log.go:172] (0xc0022200a0) (1) Data frame sent I0317 11:32:38.751428 6 log.go:172] (0xc001f622c0) (0xc0022200a0) Stream removed, broadcasting: 1 I0317 11:32:38.751504 6 log.go:172] (0xc001f622c0) (0xc0022200a0) Stream removed, broadcasting: 1 I0317 11:32:38.751524 6 log.go:172] (0xc001f622c0) (0xc001ce0a00) Stream removed, broadcasting: 3 I0317 11:32:38.751547 6 log.go:172] (0xc001f622c0) (0xc002220140) Stream removed, broadcasting: 5 Mar 17 11:32:38.751: INFO: Waiting for endpoints: map[] I0317 11:32:38.751639 6 log.go:172] (0xc001f622c0) Go away received Mar 17 11:32:38.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.64:8080/dial?request=hostName&protocol=udp&host=10.244.2.63&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-ckp5p PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 17 11:32:38.754: INFO: >>> kubeConfig: /root/.kube/config I0317 11:32:38.789057 6 log.go:172] (0xc001fd22c0) (0xc001ce10e0) Create stream I0317 11:32:38.789081 6 log.go:172] (0xc001fd22c0) (0xc001ce10e0) Stream added, broadcasting: 1 I0317 11:32:38.790978 6 log.go:172] (0xc001fd22c0) Reply frame received for 1 I0317 11:32:38.791032 6 log.go:172] (0xc001fd22c0) (0xc0008a7360) Create stream I0317 11:32:38.791051 6 log.go:172] (0xc001fd22c0) (0xc0008a7360) Stream added, broadcasting: 3 I0317 11:32:38.792009 6 log.go:172] (0xc001fd22c0) Reply frame received for 3 I0317 11:32:38.792052 6 log.go:172] (0xc001fd22c0) (0xc0021d4460) Create stream I0317 11:32:38.792067 6 log.go:172] 
(0xc001fd22c0) (0xc0021d4460) Stream added, broadcasting: 5 I0317 11:32:38.792898 6 log.go:172] (0xc001fd22c0) Reply frame received for 5 I0317 11:32:38.846400 6 log.go:172] (0xc001fd22c0) Data frame received for 3 I0317 11:32:38.846430 6 log.go:172] (0xc0008a7360) (3) Data frame handling I0317 11:32:38.846449 6 log.go:172] (0xc0008a7360) (3) Data frame sent I0317 11:32:38.847108 6 log.go:172] (0xc001fd22c0) Data frame received for 5 I0317 11:32:38.847133 6 log.go:172] (0xc001fd22c0) Data frame received for 3 I0317 11:32:38.847166 6 log.go:172] (0xc0008a7360) (3) Data frame handling I0317 11:32:38.847181 6 log.go:172] (0xc0021d4460) (5) Data frame handling I0317 11:32:38.848662 6 log.go:172] (0xc001fd22c0) Data frame received for 1 I0317 11:32:38.848712 6 log.go:172] (0xc001ce10e0) (1) Data frame handling I0317 11:32:38.848750 6 log.go:172] (0xc001ce10e0) (1) Data frame sent I0317 11:32:38.848794 6 log.go:172] (0xc001fd22c0) (0xc001ce10e0) Stream removed, broadcasting: 1 I0317 11:32:38.848888 6 log.go:172] (0xc001fd22c0) Go away received I0317 11:32:38.848966 6 log.go:172] (0xc001fd22c0) (0xc001ce10e0) Stream removed, broadcasting: 1 I0317 11:32:38.848989 6 log.go:172] (0xc001fd22c0) (0xc0008a7360) Stream removed, broadcasting: 3 I0317 11:32:38.849015 6 log.go:172] (0xc001fd22c0) (0xc0021d4460) Stream removed, broadcasting: 5 Mar 17 11:32:38.849: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:32:38.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-ckp5p" for this suite. 
Mar 17 11:33:02.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:33:02.923: INFO: namespace: e2e-tests-pod-network-test-ckp5p, resource: bindings, ignored listing per whitelist Mar 17 11:33:02.947: INFO: namespace e2e-tests-pod-network-test-ckp5p deletion completed in 24.093369845s • [SLOW TEST:48.607 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:33:02.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:33:03.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-fvmxz" for this suite. Mar 17 11:33:09.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:33:09.225: INFO: namespace: e2e-tests-kubelet-test-fvmxz, resource: bindings, ignored listing per whitelist Mar 17 11:33:09.227: INFO: namespace e2e-tests-kubelet-test-fvmxz deletion completed in 6.084490019s • [SLOW TEST:6.280 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:33:09.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an 
image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 17 11:33:09.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-d2d26' Mar 17 11:33:09.447: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 17 11:33:09.447: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 17 11:33:09.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-d2d26' Mar 17 11:33:09.552: INFO: stderr: "" Mar 17 11:33:09.552: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:33:09.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d2d26" for this suite. 
Mar 17 11:33:31.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:33:31.622: INFO: namespace: e2e-tests-kubectl-d2d26, resource: bindings, ignored listing per whitelist Mar 17 11:33:31.665: INFO: namespace e2e-tests-kubectl-d2d26 deletion completed in 22.10990822s • [SLOW TEST:22.438 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:33:31.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-stj5f/configmap-test-21378b01-6843-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:33:31.788: INFO: Waiting up to 5m0s for pod "pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-stj5f" to be "success or failure" Mar 17 11:33:31.801: INFO: Pod "pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f": Phase="Pending", 
Reason="", readiness=false. Elapsed: 12.90431ms Mar 17 11:33:33.805: INFO: Pod "pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01670971s Mar 17 11:33:35.809: INFO: Pod "pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021332722s STEP: Saw pod success Mar 17 11:33:35.809: INFO: Pod "pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:33:35.813: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f container env-test: STEP: delete the pod Mar 17 11:33:35.832: INFO: Waiting for pod pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f to disappear Mar 17 11:33:35.837: INFO: Pod pod-configmaps-2138445a-6843-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:33:35.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-stj5f" for this suite. 
Mar 17 11:33:41.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:33:41.909: INFO: namespace: e2e-tests-configmap-stj5f, resource: bindings, ignored listing per whitelist Mar 17 11:33:41.943: INFO: namespace e2e-tests-configmap-stj5f deletion completed in 6.10379293s • [SLOW TEST:10.278 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:33:41.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Mar 17 11:33:42.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 17 11:33:42.125: INFO: stderr: "" Mar 17 11:33:42.125: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:33:42.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9kf59" for this suite. Mar 17 11:33:48.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:33:48.188: INFO: namespace: e2e-tests-kubectl-9kf59, resource: bindings, ignored listing per whitelist Mar 17 11:33:48.216: INFO: namespace e2e-tests-kubectl-9kf59 deletion completed in 6.087503894s • [SLOW TEST:6.272 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:33:48.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:33:48.366: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2b189c2d-6843-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00263f26a), BlockOwnerDeletion:(*bool)(0xc00263f26b)}} Mar 17 11:33:48.376: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2b178d69-6843-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001dc09d2), BlockOwnerDeletion:(*bool)(0xc001dc09d3)}} Mar 17 11:33:48.402: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2b17ffcb-6843-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00263f502), BlockOwnerDeletion:(*bool)(0xc00263f503)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:33:53.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-rd5sx" for this suite. 
Mar 17 11:33:59.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:33:59.554: INFO: namespace: e2e-tests-gc-rd5sx, resource: bindings, ignored listing per whitelist Mar 17 11:33:59.586: INFO: namespace e2e-tests-gc-rd5sx deletion completed in 6.158713246s • [SLOW TEST:11.370 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:33:59.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 17 11:33:59.681: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 17 11:33:59.698: INFO: Waiting for terminating namespaces to be deleted... 
Mar 17 11:33:59.700: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 17 11:33:59.707: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 17 11:33:59.707: INFO: Container kube-proxy ready: true, restart count 0 Mar 17 11:33:59.707: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 17 11:33:59.707: INFO: Container kindnet-cni ready: true, restart count 0 Mar 17 11:33:59.707: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 17 11:33:59.707: INFO: Container coredns ready: true, restart count 0 Mar 17 11:33:59.707: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 17 11:33:59.712: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 17 11:33:59.712: INFO: Container kindnet-cni ready: true, restart count 0 Mar 17 11:33:59.712: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 17 11:33:59.712: INFO: Container coredns ready: true, restart count 0 Mar 17 11:33:59.712: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 17 11:33:59.712: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fd142fde285fbe], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:34:00.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-x2vnd" for this suite. Mar 17 11:34:06.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:34:06.758: INFO: namespace: e2e-tests-sched-pred-x2vnd, resource: bindings, ignored listing per whitelist Mar 17 11:34:06.823: INFO: namespace e2e-tests-sched-pred-x2vnd deletion completed in 6.088980809s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.237 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:34:06.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:34:06.909: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 17 11:34:11.914: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 17 11:34:11.914: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 17 11:34:13.918: INFO: Creating deployment "test-rollover-deployment" Mar 17 11:34:13.927: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 17 11:34:15.934: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 17 11:34:15.942: INFO: Ensure that both replica sets have 1 created replica Mar 17 11:34:15.947: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 17 11:34:15.953: INFO: Updating deployment test-rollover-deployment Mar 17 11:34:15.953: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 17 11:34:17.979: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 17 11:34:17.985: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 17 11:34:17.991: INFO: all replica sets need to contain the pod-template-hash label Mar 17 11:34:17.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720041656, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:34:19.999: INFO: all replica sets need to contain the pod-template-hash label Mar 17 11:34:19.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:34:21.999: INFO: all replica sets need to contain the pod-template-hash label Mar 17 11:34:21.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:34:24.000: INFO: all replica sets need to contain the pod-template-hash label Mar 17 11:34:24.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:34:26.000: INFO: all replica sets need to contain the pod-template-hash label Mar 17 11:34:26.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:34:28.000: INFO: all replica sets need to contain the pod-template-hash label Mar 17 11:34:28.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041659, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720041653, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:34:30.000: INFO: Mar 17 11:34:30.000: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 17 11:34:30.009: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-sgq54,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgq54/deployments/test-rollover-deployment,UID:3a587cf5-6843-11ea-99e8-0242ac110002,ResourceVersion:318881,Generation:2,CreationTimestamp:2020-03-17 11:34:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-17 11:34:13 +0000 UTC 2020-03-17 11:34:13 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-17 11:34:29 +0000 UTC 2020-03-17 11:34:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 17 11:34:30.012: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-sgq54,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgq54/replicasets/test-rollover-deployment-5b8479fdb6,UID:3b8f1743-6843-11ea-99e8-0242ac110002,ResourceVersion:318872,Generation:2,CreationTimestamp:2020-03-17 11:34:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3a587cf5-6843-11ea-99e8-0242ac110002 0xc00096efb7 0xc00096efb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 17 11:34:30.012: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 17 11:34:30.012: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-sgq54,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgq54/replicasets/test-rollover-controller,UID:36295d03-6843-11ea-99e8-0242ac110002,ResourceVersion:318880,Generation:2,CreationTimestamp:2020-03-17 11:34:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3a587cf5-6843-11ea-99e8-0242ac110002 0xc00096ecc7 0xc00096ecc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:34:30.013: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-sgq54,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgq54/replicasets/test-rollover-deployment-58494b7559,UID:3a5b2351-6843-11ea-99e8-0242ac110002,ResourceVersion:318840,Generation:2,CreationTimestamp:2020-03-17 11:34:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3a587cf5-6843-11ea-99e8-0242ac110002 0xc00096edf7 0xc00096edf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:34:30.016: INFO: Pod "test-rollover-deployment-5b8479fdb6-zhndk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-zhndk,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-sgq54,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sgq54/pods/test-rollover-deployment-5b8479fdb6-zhndk,UID:3b99cdde-6843-11ea-99e8-0242ac110002,ResourceVersion:318850,Generation:0,CreationTimestamp:2020-03-17 11:34:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 3b8f1743-6843-11ea-99e8-0242ac110002 0xc0011fd8b7 0xc0011fd8b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fz28t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fz28t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-fz28t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0011fd9e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0011fda10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:34:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:34:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:34:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:34:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.67,StartTime:2020-03-17 11:34:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-17 11:34:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://1904a2cce2be0b1a4c5087982a5b2044d0ff4fd7d3a98316c9fb7026ae21a805}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:34:30.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-sgq54" for this suite. Mar 17 11:34:36.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:34:36.061: INFO: namespace: e2e-tests-deployment-sgq54, resource: bindings, ignored listing per whitelist Mar 17 11:34:36.108: INFO: namespace e2e-tests-deployment-sgq54 deletion completed in 6.089025169s • [SLOW TEST:29.285 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:34:36.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-47a68144-6843-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets 
Mar 17 11:34:36.253: INFO: Waiting up to 5m0s for pod "pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-fhv7k" to be "success or failure" Mar 17 11:34:36.271: INFO: Pod "pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.307358ms Mar 17 11:34:38.275: INFO: Pod "pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021405954s Mar 17 11:34:40.279: INFO: Pod "pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025598838s STEP: Saw pod success Mar 17 11:34:40.279: INFO: Pod "pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:34:40.282: INFO: Trying to get logs from node hunter-worker pod pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f container secret-env-test: STEP: delete the pod Mar 17 11:34:40.307: INFO: Waiting for pod pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f to disappear Mar 17 11:34:40.311: INFO: Pod pod-secrets-47a73911-6843-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:34:40.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fhv7k" for this suite. 
Mar 17 11:34:46.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:34:46.439: INFO: namespace: e2e-tests-secrets-fhv7k, resource: bindings, ignored listing per whitelist Mar 17 11:34:46.453: INFO: namespace e2e-tests-secrets-fhv7k deletion completed in 6.13842192s • [SLOW TEST:10.345 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:34:46.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-bd25p [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-bd25p STEP: Creating statefulset with conflicting port in namespace 
e2e-tests-statefulset-bd25p STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-bd25p STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-bd25p Mar 17 11:34:50.686: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-bd25p, name: ss-0, uid: 4e02a60a-6843-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Mar 17 11:34:51.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-bd25p, name: ss-0, uid: 4e02a60a-6843-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Mar 17 11:34:51.306: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-bd25p, name: ss-0, uid: 4e02a60a-6843-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Mar 17 11:34:51.324: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-bd25p STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-bd25p STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-bd25p and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 17 11:35:01.469: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bd25p Mar 17 11:35:01.472: INFO: Scaling statefulset ss to 0 Mar 17 11:35:11.490: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:35:11.493: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:35:11.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-bd25p" for this suite. 
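The eviction scenario above relies on a hostPort conflict: a plain pod claims a port on a node, then a StatefulSet pod pinned to the same node is created with the same hostPort, fails, and must be recreated by the controller once the conflicting pod is removed. A rough sketch of the StatefulSet side (service and set names from the log; node, image, and port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test              # "Creating service test" in the log
spec:
  clusterIP: None         # headless, as StatefulSets require
  selector:
    app: ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: hunter-worker      # pinned to the chosen node (assumed; the log only says "a node")
      containers:
      - name: webserver            # assumed name/image
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017          # assumed port; collides with the pre-created "test-pod"
```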
Mar 17 11:35:17.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:35:17.564: INFO: namespace: e2e-tests-statefulset-bd25p, resource: bindings, ignored listing per whitelist Mar 17 11:35:17.638: INFO: namespace e2e-tests-statefulset-bd25p deletion completed in 6.100555785s • [SLOW TEST:31.185 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:35:17.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-60616007-6843-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:35:17.752: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-q4vf5" to be "success or failure" Mar 17 11:35:17.766: INFO: Pod 
"pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.850424ms Mar 17 11:35:19.770: INFO: Pod "pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018007673s Mar 17 11:35:21.774: INFO: Pod "pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022087219s STEP: Saw pod success Mar 17 11:35:21.774: INFO: Pod "pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:35:21.777: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 17 11:35:21.798: INFO: Waiting for pod pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f to disappear Mar 17 11:35:21.803: INFO: Pod pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:35:21.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q4vf5" for this suite. 
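The projected-ConfigMap volume test above boils down to a pod like the following sketch (configMap, pod, and container names from the log; image, command, key, and mount path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-60635c59-6843-11ea-b08f-0242ac11000f
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: busybox                          # assumed
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]  # assumed path/key
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-60616007-6843-11ea-b08f-0242ac11000f
```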
Mar 17 11:35:27.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:35:27.830: INFO: namespace: e2e-tests-projected-q4vf5, resource: bindings, ignored listing per whitelist Mar 17 11:35:27.899: INFO: namespace e2e-tests-projected-q4vf5 deletion completed in 6.093432369s • [SLOW TEST:10.260 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:35:27.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-667fbf0a-6843-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 11:35:28.044: INFO: Waiting up to 5m0s for pod "pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-7kt89" to be "success or failure" Mar 17 11:35:28.049: INFO: Pod "pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.2056ms Mar 17 11:35:30.065: INFO: Pod "pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021354754s Mar 17 11:35:32.069: INFO: Pod "pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025344432s STEP: Saw pod success Mar 17 11:35:32.069: INFO: Pod "pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:35:32.072: INFO: Trying to get logs from node hunter-worker pod pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 17 11:35:32.127: INFO: Waiting for pod pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f to disappear Mar 17 11:35:32.139: INFO: Pod pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:35:32.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7kt89" for this suite. 
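"Mappings and Item Mode set" in the test title above refers to the Secret volume's `items` list, which remaps a key to a new file path and sets a per-file mode. A sketch of the pod (secret, pod, and container names from the log; key, path, mode, image, and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-6682239d-6843-11ea-b08f-0242ac11000f
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test       # container name from the log
    image: busybox                 # assumed
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-667fbf0a-6843-11ea-b08f-0242ac11000f
      items:                       # the "mapping": remap a key to a new path
      - key: data-1                # assumed key
        path: new-path-data-1      # assumed path
        mode: 0400                 # the "Item Mode": per-file permission bits (assumed value)
```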
Mar 17 11:35:38.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:35:38.230: INFO: namespace: e2e-tests-secrets-7kt89, resource: bindings, ignored listing per whitelist Mar 17 11:35:38.254: INFO: namespace e2e-tests-secrets-7kt89 deletion completed in 6.111133894s • [SLOW TEST:10.354 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:35:38.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 17 11:35:42.887: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6ca9d8d8-6843-11ea-b08f-0242ac11000f" Mar 17 11:35:42.887: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6ca9d8d8-6843-11ea-b08f-0242ac11000f" in namespace 
"e2e-tests-pods-9sj8v" to be "terminated due to deadline exceeded" Mar 17 11:35:42.894: INFO: Pod "pod-update-activedeadlineseconds-6ca9d8d8-6843-11ea-b08f-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 7.087579ms Mar 17 11:35:44.898: INFO: Pod "pod-update-activedeadlineseconds-6ca9d8d8-6843-11ea-b08f-0242ac11000f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011189859s Mar 17 11:35:44.898: INFO: Pod "pod-update-activedeadlineseconds-6ca9d8d8-6843-11ea-b08f-0242ac11000f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:35:44.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-9sj8v" for this suite. Mar 17 11:35:50.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:35:50.977: INFO: namespace: e2e-tests-pods-9sj8v, resource: bindings, ignored listing per whitelist Mar 17 11:35:50.999: INFO: namespace e2e-tests-pods-9sj8v deletion completed in 6.096031309s • [SLOW TEST:12.744 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:35:50.999: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-744490cd-6843-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:35:51.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-k96sl" to be "success or failure" Mar 17 11:35:51.127: INFO: Pod "pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293789ms Mar 17 11:35:53.131: INFO: Pod "pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006446378s Mar 17 11:35:55.135: INFO: Pod "pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010598321s STEP: Saw pod success Mar 17 11:35:55.136: INFO: Pod "pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:35:55.138: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 17 11:35:55.160: INFO: Waiting for pod pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f to disappear Mar 17 11:35:55.203: INFO: Pod pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:35:55.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k96sl" for this suite. 
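"As non-root" in the test title above means the pod runs under an unprivileged UID and must still be able to read the ConfigMap volume. A sketch of the pod (configMap, pod, and container names from the log; UID, image, command, key, and mount path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-74480736-6843-11ea-b08f-0242ac11000f
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # run as a non-root UID (assumed value)
  containers:
  - name: configmap-volume-test    # container name from the log
    image: busybox                 # assumed
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]  # assumed path/key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-744490cd-6843-11ea-b08f-0242ac11000f
```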
Mar 17 11:36:01.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:36:01.289: INFO: namespace: e2e-tests-configmap-k96sl, resource: bindings, ignored listing per whitelist Mar 17 11:36:01.326: INFO: namespace e2e-tests-configmap-k96sl deletion completed in 6.118663493s • [SLOW TEST:10.327 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:36:01.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-7a729429-6843-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:36:01.489: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-9gn8t" to be "success or failure" Mar 17 11:36:01.493: INFO: Pod "pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.98472ms Mar 17 11:36:03.508: INFO: Pod "pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018465853s Mar 17 11:36:05.521: INFO: Pod "pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031822202s STEP: Saw pod success Mar 17 11:36:05.521: INFO: Pod "pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:36:05.523: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 17 11:36:05.542: INFO: Waiting for pod pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f to disappear Mar 17 11:36:05.548: INFO: Pod pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:36:05.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9gn8t" for this suite. 
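The "multiple volumes in the same pod" test above projects the same ConfigMap into two separate volumes and mounts both. A sketch (configMap, pod, and container names from the log; image, command, key, and mount paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-7a7431c9-6843-11ea-b08f-0242ac11000f
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: busybox                          # assumed
    command: ["sh", "-c", "cat /etc/volume-1/data-1 /etc/volume-2/data-1"]  # assumed paths/key
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/volume-1
    - name: projected-configmap-volume-2
      mountPath: /etc/volume-2
  volumes:                                  # the same ConfigMap, projected twice
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-7a729429-6843-11ea-b08f-0242ac11000f
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-7a729429-6843-11ea-b08f-0242ac11000f
```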
Mar 17 11:36:11.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:36:11.619: INFO: namespace: e2e-tests-projected-9gn8t, resource: bindings, ignored listing per whitelist Mar 17 11:36:11.665: INFO: namespace e2e-tests-projected-9gn8t deletion completed in 6.114168856s • [SLOW TEST:10.339 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:36:11.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:36:18.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-sfzfs" for this suite. Mar 17 11:36:24.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:36:24.057: INFO: namespace: e2e-tests-namespaces-sfzfs, resource: bindings, ignored listing per whitelist Mar 17 11:36:24.119: INFO: namespace e2e-tests-namespaces-sfzfs deletion completed in 6.107172294s STEP: Destroying namespace "e2e-tests-nsdeletetest-nn6bc" for this suite. Mar 17 11:36:24.122: INFO: Namespace e2e-tests-nsdeletetest-nn6bc was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-7b4r2" for this suite. Mar 17 11:36:30.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:36:30.157: INFO: namespace: e2e-tests-nsdeletetest-7b4r2, resource: bindings, ignored listing per whitelist Mar 17 11:36:30.214: INFO: namespace e2e-tests-nsdeletetest-7b4r2 deletion completed in 6.09251818s • [SLOW TEST:18.549 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:36:30.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0317 11:37:10.366183 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 17 11:37:10.366: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] 
[sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:37:10.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5cp8g" for this suite. Mar 17 11:37:20.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:37:20.488: INFO: namespace: e2e-tests-gc-5cp8g, resource: bindings, ignored listing per whitelist Mar 17 11:37:20.488: INFO: namespace e2e-tests-gc-5cp8g deletion completed in 10.118576808s • [SLOW TEST:50.274 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:37:20.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-a99c8fe4-6843-11ea-b08f-0242ac11000f STEP: Creating secret with name 
secret-projected-all-test-volume-a99c8f95-6843-11ea-b08f-0242ac11000f STEP: Creating a pod to test Check all projections for projected volume plugin Mar 17 11:37:20.617: INFO: Waiting up to 5m0s for pod "projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-vjhbf" to be "success or failure" Mar 17 11:37:20.621: INFO: Pod "projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.676186ms Mar 17 11:37:22.625: INFO: Pod "projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007267577s Mar 17 11:37:24.629: INFO: Pod "projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011513588s STEP: Saw pod success Mar 17 11:37:24.629: INFO: Pod "projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:37:24.632: INFO: Trying to get logs from node hunter-worker pod projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f container projected-all-volume-test: STEP: delete the pod Mar 17 11:37:24.653: INFO: Waiting for pod projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f to disappear Mar 17 11:37:24.657: INFO: Pod projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:37:24.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vjhbf" for this suite. 
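"All components that make up the projection API" in the test above means a single `projected` volume combining ConfigMap, Secret, and downward-API sources. A sketch of the pod (configMap, secret, pod, and container names from the log; image, command, mount path, and downward-API item are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-a99c8f24-6843-11ea-b08f-0242ac11000f
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test   # container name from the log
    image: busybox                    # assumed
    command: ["sh", "-c", "ls -R /all-volumes"]   # assumed mount path
    volumeMounts:
    - name: podinfo
      mountPath: /all-volumes
  volumes:
  - name: podinfo
    projected:
      sources:                        # all three projection sources in one volume
      - configMap:
          name: configmap-projected-all-test-volume-a99c8fe4-6843-11ea-b08f-0242ac11000f
      - secret:
          name: secret-projected-all-test-volume-a99c8f95-6843-11ea-b08f-0242ac11000f
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```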
Mar 17 11:37:30.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:37:30.779: INFO: namespace: e2e-tests-projected-vjhbf, resource: bindings, ignored listing per whitelist
Mar 17 11:37:30.787: INFO: namespace e2e-tests-projected-vjhbf deletion completed in 6.125897363s
• [SLOW TEST:10.299 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:37:30.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-kbmk5
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 17 11:37:30.930: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 17 11:37:57.044: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostName&protocol=http&host=10.244.2.77&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-kbmk5 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:37:57.044: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:37:57.084369 6 log.go:172] (0xc0013902c0) (0xc0022570e0) Create stream
I0317 11:37:57.084407 6 log.go:172] (0xc0013902c0) (0xc0022570e0) Stream added, broadcasting: 1
I0317 11:37:57.086438 6 log.go:172] (0xc0013902c0) Reply frame received for 1
I0317 11:37:57.086465 6 log.go:172] (0xc0013902c0) (0xc002257180) Create stream
I0317 11:37:57.086475 6 log.go:172] (0xc0013902c0) (0xc002257180) Stream added, broadcasting: 3
I0317 11:37:57.087571 6 log.go:172] (0xc0013902c0) Reply frame received for 3
I0317 11:37:57.087613 6 log.go:172] (0xc0013902c0) (0xc001466c80) Create stream
I0317 11:37:57.087629 6 log.go:172] (0xc0013902c0) (0xc001466c80) Stream added, broadcasting: 5
I0317 11:37:57.088676 6 log.go:172] (0xc0013902c0) Reply frame received for 5
I0317 11:37:57.188370 6 log.go:172] (0xc0013902c0) Data frame received for 3
I0317 11:37:57.188397 6 log.go:172] (0xc002257180) (3) Data frame handling
I0317 11:37:57.188422 6 log.go:172] (0xc002257180) (3) Data frame sent
I0317 11:37:57.189561 6 log.go:172] (0xc0013902c0) Data frame received for 5
I0317 11:37:57.189593 6 log.go:172] (0xc001466c80) (5) Data frame handling
I0317 11:37:57.189676 6 log.go:172] (0xc0013902c0) Data frame received for 3
I0317 11:37:57.189749 6 log.go:172] (0xc002257180) (3) Data frame handling
I0317 11:37:57.191532 6 log.go:172] (0xc0013902c0) Data frame received for 1
I0317 11:37:57.191562 6 log.go:172] (0xc0022570e0) (1) Data frame handling
I0317 11:37:57.191584 6 log.go:172] (0xc0022570e0) (1) Data frame sent
I0317 11:37:57.191604 6 log.go:172] (0xc0013902c0) (0xc0022570e0) Stream removed, broadcasting: 1
I0317 11:37:57.191628 6 log.go:172] (0xc0013902c0) Go away received
I0317 11:37:57.191697 6 log.go:172] (0xc0013902c0) (0xc0022570e0) Stream removed, broadcasting: 1
I0317 11:37:57.191714 6 log.go:172] (0xc0013902c0) (0xc002257180) Stream removed, broadcasting: 3
I0317 11:37:57.191725 6 log.go:172] (0xc0013902c0) (0xc001466c80) Stream removed, broadcasting: 5
Mar 17 11:37:57.191: INFO: Waiting for endpoints: map[]
Mar 17 11:37:57.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostName&protocol=http&host=10.244.1.141&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-kbmk5 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:37:57.195: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:37:57.223946 6 log.go:172] (0xc0003029a0) (0xc001825a40) Create stream
I0317 11:37:57.223971 6 log.go:172] (0xc0003029a0) (0xc001825a40) Stream added, broadcasting: 1
I0317 11:37:57.233753 6 log.go:172] (0xc0003029a0) Reply frame received for 1
I0317 11:37:57.233810 6 log.go:172] (0xc0003029a0) (0xc00156b4a0) Create stream
I0317 11:37:57.233829 6 log.go:172] (0xc0003029a0) (0xc00156b4a0) Stream added, broadcasting: 3
I0317 11:37:57.234801 6 log.go:172] (0xc0003029a0) Reply frame received for 3
I0317 11:37:57.234839 6 log.go:172] (0xc0003029a0) (0xc001825ae0) Create stream
I0317 11:37:57.234860 6 log.go:172] (0xc0003029a0) (0xc001825ae0) Stream added, broadcasting: 5
I0317 11:37:57.235589 6 log.go:172] (0xc0003029a0) Reply frame received for 5
I0317 11:37:57.304725 6 log.go:172] (0xc0003029a0) Data frame received for 3
I0317 11:37:57.304749 6 log.go:172] (0xc00156b4a0) (3) Data frame handling
I0317 11:37:57.304763 6 log.go:172] (0xc00156b4a0) (3) Data frame sent
I0317 11:37:57.305844 6 log.go:172] (0xc0003029a0) Data frame received for 5
I0317 11:37:57.305889 6 log.go:172] (0xc001825ae0) (5) Data frame handling
I0317 11:37:57.305925 6 log.go:172] (0xc0003029a0) Data frame received for 3
I0317 11:37:57.305947 6 log.go:172] (0xc00156b4a0) (3) Data frame handling
I0317 11:37:57.307351 6 log.go:172] (0xc0003029a0) Data frame received for 1
I0317 11:37:57.307375 6 log.go:172] (0xc001825a40) (1) Data frame handling
I0317 11:37:57.307398 6 log.go:172] (0xc001825a40) (1) Data frame sent
I0317 11:37:57.307427 6 log.go:172] (0xc0003029a0) (0xc001825a40) Stream removed, broadcasting: 1
I0317 11:37:57.307554 6 log.go:172] (0xc0003029a0) Go away received
I0317 11:37:57.307605 6 log.go:172] (0xc0003029a0) (0xc001825a40) Stream removed, broadcasting: 1
I0317 11:37:57.307638 6 log.go:172] (0xc0003029a0) (0xc00156b4a0) Stream removed, broadcasting: 3
I0317 11:37:57.307686 6 log.go:172] (0xc0003029a0) (0xc001825ae0) Stream removed, broadcasting: 5
Mar 17 11:37:57.307: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:37:57.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-kbmk5" for this suite.
Mar 17 11:38:13.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:38:13.340: INFO: namespace: e2e-tests-pod-network-test-kbmk5, resource: bindings, ignored listing per whitelist
Mar 17 11:38:13.409: INFO: namespace e2e-tests-pod-network-test-kbmk5 deletion completed in 16.09720872s
• [SLOW TEST:42.621 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:38:13.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 17 11:38:13.521: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:38:18.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9jl2n" for this suite.
Mar 17 11:38:24.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:38:24.226: INFO: namespace: e2e-tests-init-container-9jl2n, resource: bindings, ignored listing per whitelist
Mar 17 11:38:24.291: INFO: namespace e2e-tests-init-container-9jl2n deletion completed in 6.099772192s
• [SLOW TEST:10.882 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:38:24.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:38:28.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-m2bnv" for this suite.
Mar 17 11:38:34.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:38:34.533: INFO: namespace: e2e-tests-kubelet-test-m2bnv, resource: bindings, ignored listing per whitelist
Mar 17 11:38:34.569: INFO: namespace e2e-tests-kubelet-test-m2bnv deletion completed in 6.103903736s
• [SLOW TEST:10.278 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:38:34.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:38:34.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:38:38.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-t8t9p" for this suite.
Mar 17 11:39:24.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:39:24.866: INFO: namespace: e2e-tests-pods-t8t9p, resource: bindings, ignored listing per whitelist
Mar 17 11:39:24.925: INFO: namespace e2e-tests-pods-t8t9p deletion completed in 46.085759181s
• [SLOW TEST:50.355 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:39:24.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f3cb4366-6843-11ea-b08f-0242ac11000f
STEP: Creating configMap with name cm-test-opt-upd-f3cb43dc-6843-11ea-b08f-0242ac11000f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f3cb4366-6843-11ea-b08f-0242ac11000f
STEP: Updating configmap cm-test-opt-upd-f3cb43dc-6843-11ea-b08f-0242ac11000f
STEP: Creating configMap with name cm-test-opt-create-f3cb440a-6843-11ea-b08f-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:39:33.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2xvd9" for this suite.
Mar 17 11:39:55.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:39:55.313: INFO: namespace: e2e-tests-projected-2xvd9, resource: bindings, ignored listing per whitelist
Mar 17 11:39:55.357: INFO: namespace e2e-tests-projected-2xvd9 deletion completed in 22.096607324s
• [SLOW TEST:30.431 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:39:55.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rv94z
Mar 17 11:39:59.474: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rv94z
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 11:39:59.476: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:44:00.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rv94z" for this suite.
Mar 17 11:44:06.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:44:06.293: INFO: namespace: e2e-tests-container-probe-rv94z, resource: bindings, ignored listing per whitelist
Mar 17 11:44:06.342: INFO: namespace e2e-tests-container-probe-rv94z deletion completed in 6.1160378s
• [SLOW TEST:250.985 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:44:06.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 17 11:44:06.440: INFO: Waiting up to 5m0s for pod "downward-api-9b825873-6844-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-hsw6r" to be "success or failure"
Mar 17 11:44:06.444: INFO: Pod "downward-api-9b825873-6844-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.449969ms
Mar 17 11:44:08.498: INFO: Pod "downward-api-9b825873-6844-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057138636s
Mar 17 11:44:10.502: INFO: Pod "downward-api-9b825873-6844-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061565051s
STEP: Saw pod success
Mar 17 11:44:10.502: INFO: Pod "downward-api-9b825873-6844-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:44:10.506: INFO: Trying to get logs from node hunter-worker2 pod downward-api-9b825873-6844-11ea-b08f-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 17 11:44:10.529: INFO: Waiting for pod downward-api-9b825873-6844-11ea-b08f-0242ac11000f to disappear
Mar 17 11:44:10.534: INFO: Pod downward-api-9b825873-6844-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:44:10.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hsw6r" for this suite.
Mar 17 11:44:16.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:44:16.630: INFO: namespace: e2e-tests-downward-api-hsw6r, resource: bindings, ignored listing per whitelist
Mar 17 11:44:16.653: INFO: namespace e2e-tests-downward-api-hsw6r deletion completed in 6.115956786s
• [SLOW TEST:10.311 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:44:16.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Mar 17 11:44:23.814: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:44:24.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-v7vvq" for this suite.
Mar 17 11:44:46.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:44:46.866: INFO: namespace: e2e-tests-replicaset-v7vvq, resource: bindings, ignored listing per whitelist
Mar 17 11:44:46.938: INFO: namespace e2e-tests-replicaset-v7vvq deletion completed in 22.093341682s
• [SLOW TEST:30.285 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:44:46.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f
Mar 17 11:44:47.052: INFO: Pod name my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f: Found 0 pods out of 1
Mar 17 11:44:52.056: INFO: Pod name my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f: Found 1 pods out of 1
Mar 17 11:44:52.056: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f" are running
Mar 17 11:44:52.059: INFO: Pod "my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f-96884" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:44:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:44:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:44:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-17 11:44:47 +0000 UTC Reason: Message:}])
Mar 17 11:44:52.059: INFO: Trying to dial the pod
Mar 17 11:44:57.070: INFO: Controller my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f-96884]: "my-hostname-basic-b3b62f3d-6844-11ea-b08f-0242ac11000f-96884", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:44:57.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-5lczl" for this suite.
Mar 17 11:45:03.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:45:03.119: INFO: namespace: e2e-tests-replication-controller-5lczl, resource: bindings, ignored listing per whitelist
Mar 17 11:45:03.170: INFO: namespace e2e-tests-replication-controller-5lczl deletion completed in 6.096657673s
• [SLOW TEST:16.232 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:45:03.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:45:03.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-4wjbb" to be "success or failure"
Mar 17 11:45:03.310: INFO: Pod "downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.094922ms
Mar 17 11:45:05.314: INFO: Pod "downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018112876s
Mar 17 11:45:07.318: INFO: Pod "downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022034019s
STEP: Saw pod success
Mar 17 11:45:07.318: INFO: Pod "downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:45:07.321: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f container client-container:
STEP: delete the pod
Mar 17 11:45:07.339: INFO: Waiting for pod downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f to disappear
Mar 17 11:45:07.349: INFO: Pod downwardapi-volume-bd668a0b-6844-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:45:07.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4wjbb" for this suite.
Mar 17 11:45:13.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:45:13.391: INFO: namespace: e2e-tests-downward-api-4wjbb, resource: bindings, ignored listing per whitelist
Mar 17 11:45:13.471: INFO: namespace e2e-tests-downward-api-4wjbb deletion completed in 6.119039725s
• [SLOW TEST:10.301 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:45:13.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:45:17.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-v9c89" for this suite.
Mar 17 11:46:03.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:46:03.683: INFO: namespace: e2e-tests-kubelet-test-v9c89, resource: bindings, ignored listing per whitelist Mar 17 11:46:03.720: INFO: namespace e2e-tests-kubelet-test-v9c89 deletion completed in 46.101611603s • [SLOW TEST:50.249 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:46:03.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:46:03.819: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 17 11:46:03.848: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 17 11:46:08.870: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is 
running Mar 17 11:46:08.870: INFO: Creating deployment "test-rolling-update-deployment" Mar 17 11:46:08.875: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 17 11:46:08.881: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 17 11:46:10.888: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 17 11:46:10.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042368, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042368, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042368, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042368, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:46:12.895: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 17 11:46:12.930: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-sgkmx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgkmx/deployments/test-rolling-update-deployment,UID:e47da337-6844-11ea-99e8-0242ac110002,ResourceVersion:321171,Generation:1,CreationTimestamp:2020-03-17 11:46:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-17 11:46:08 +0000 UTC 2020-03-17 11:46:08 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-17 11:46:11 +0000 UTC 2020-03-17 11:46:08 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 17 11:46:12.934: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-sgkmx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgkmx/replicasets/test-rolling-update-deployment-75db98fb4c,UID:e47fe85d-6844-11ea-99e8-0242ac110002,ResourceVersion:321162,Generation:1,CreationTimestamp:2020-03-17 11:46:08 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e47da337-6844-11ea-99e8-0242ac110002 0xc0024a40f7 0xc0024a40f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 17 11:46:12.934: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 17 11:46:12.934: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-sgkmx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgkmx/replicasets/test-rolling-update-controller,UID:e17ab9d0-6844-11ea-99e8-0242ac110002,ResourceVersion:321170,Generation:2,CreationTimestamp:2020-03-17 11:46:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e47da337-6844-11ea-99e8-0242ac110002 0xc0024a4037 0xc0024a4038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:46:12.938: INFO: Pod "test-rolling-update-deployment-75db98fb4c-szsnz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-szsnz,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-sgkmx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sgkmx/pods/test-rolling-update-deployment-75db98fb4c-szsnz,UID:e4807a28-6844-11ea-99e8-0242ac110002,ResourceVersion:321161,Generation:0,CreationTimestamp:2020-03-17 11:46:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c e47fe85d-6844-11ea-99e8-0242ac110002 0xc0024a49d7 0xc0024a49d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-522d7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-522d7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-522d7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024a4a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024a4a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:46:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:46:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:46:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:46:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.84,StartTime:2020-03-17 11:46:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-17 11:46:11 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://55a192d20518d22e4233aa440a0caf9dd65156a207bcc1295fff8537c7630c99}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:46:12.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-sgkmx" 
for this suite. Mar 17 11:46:18.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:46:18.986: INFO: namespace: e2e-tests-deployment-sgkmx, resource: bindings, ignored listing per whitelist Mar 17 11:46:19.024: INFO: namespace e2e-tests-deployment-sgkmx deletion completed in 6.08297232s • [SLOW TEST:15.304 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:46:19.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 17 11:46:19.115: INFO: Waiting up to 5m0s for pod "downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-5wj9z" to be "success or failure" Mar 17 11:46:19.124: INFO: Pod "downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.068652ms Mar 17 11:46:21.140: INFO: Pod "downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024617231s Mar 17 11:46:23.144: INFO: Pod "downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028953925s STEP: Saw pod success Mar 17 11:46:23.144: INFO: Pod "downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:46:23.147: INFO: Trying to get logs from node hunter-worker pod downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f container dapi-container: STEP: delete the pod Mar 17 11:46:23.244: INFO: Waiting for pod downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f to disappear Mar 17 11:46:23.250: INFO: Pod downward-api-ea95a1b2-6844-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:46:23.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5wj9z" for this suite. 
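[Editor's note] The repeated `Phase="Pending" ... Elapsed: ...` records above come from the framework polling the pod's status roughly every two seconds until it reaches a terminal phase ("success or failure") or the stated 5m0s timeout expires. A minimal Python sketch of that polling pattern, where `get_phase` is a hypothetical stand-in for querying the pod's status from the API server (not the framework's actual Go code):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches 'Succeeded' or 'Failed',
    logging the elapsed time on each attempt, like the e2e wait loop.
    get_phase is a stand-in for an API-server status query."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Mirrors the log format: Phase="Pending", ... Elapsed: 2.02s
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod not terminal after {timeout}s")
        time.sleep(interval)
```

In the log above, "Saw pod success" corresponds to the loop returning "Succeeded", after which the test fetches the container's logs and deletes the pod.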
Mar 17 11:46:29.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:46:29.300: INFO: namespace: e2e-tests-downward-api-5wj9z, resource: bindings, ignored listing per whitelist Mar 17 11:46:29.370: INFO: namespace e2e-tests-downward-api-5wj9z deletion completed in 6.116846268s • [SLOW TEST:10.345 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:46:29.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-csdjj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-csdjj to expose endpoints map[] Mar 17 11:46:29.527: INFO: Get endpoints failed (13.135545ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 17 11:46:30.531: INFO: successfully validated that service multi-endpoint-test in 
namespace e2e-tests-services-csdjj exposes endpoints map[] (1.016514569s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-csdjj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-csdjj to expose endpoints map[pod1:[100]] Mar 17 11:46:33.582: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-csdjj exposes endpoints map[pod1:[100]] (3.044636557s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-csdjj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-csdjj to expose endpoints map[pod2:[101] pod1:[100]] Mar 17 11:46:36.678: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-csdjj exposes endpoints map[pod1:[100] pod2:[101]] (3.091839439s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-csdjj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-csdjj to expose endpoints map[pod2:[101]] Mar 17 11:46:37.703: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-csdjj exposes endpoints map[pod2:[101]] (1.020798769s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-csdjj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-csdjj to expose endpoints map[] Mar 17 11:46:38.736: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-csdjj exposes endpoints map[] (1.02783733s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:46:38.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-csdjj" for this suite. 
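[Editor's note] The "exposes endpoints map[...]" checks above repeatedly compare the service's observed endpoints against an expected map of pod name to port list. Note the log prints the expected map as `map[pod2:[101] pod1:[100]]` but reports validating `map[pod1:[100] pod2:[101]]`: Go map iteration order is arbitrary, and the comparison is by content, not print order. A minimal, order-insensitive sketch of that check in Python, with `get_endpoints` as a hypothetical stand-in for listing the service's endpoints:

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=1.0):
    """Retry until the observed endpoints (pod name -> list of ports)
    match `expected` regardless of ordering, or the timeout expires.
    Returns the elapsed seconds, as the log lines report."""
    def normalize(m):
        # Sort port lists so comparison ignores ordering entirely.
        return {pod: sorted(ports) for pod, ports in m.items()}
    start = time.monotonic()
    while time.monotonic() - start <= timeout:
        if normalize(get_endpoints()) == normalize(expected):
            return time.monotonic() - start
        time.sleep(interval)
    raise TimeoutError(f"endpoints never matched {expected}")
```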
Mar 17 11:46:44.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:46:44.833: INFO: namespace: e2e-tests-services-csdjj, resource: bindings, ignored listing per whitelist Mar 17 11:46:44.863: INFO: namespace e2e-tests-services-csdjj deletion completed in 6.079069332s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:15.493 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:46:44.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-5rhw STEP: Creating a pod to test atomic-volume-subpath Mar 17 11:46:45.001: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5rhw" in namespace "e2e-tests-subpath-mdq64" to be "success or failure" Mar 17 11:46:45.005: 
INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353737ms Mar 17 11:46:47.009: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008404069s Mar 17 11:46:49.014: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012658485s Mar 17 11:46:51.032: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 6.031395649s Mar 17 11:46:53.036: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 8.035280606s Mar 17 11:46:55.040: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 10.038985319s Mar 17 11:46:57.044: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 12.042878428s Mar 17 11:46:59.048: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 14.046644057s Mar 17 11:47:01.068: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 16.067373211s Mar 17 11:47:03.072: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 18.071188755s Mar 17 11:47:05.076: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 20.074785097s Mar 17 11:47:07.080: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 22.078774682s Mar 17 11:47:09.128: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Running", Reason="", readiness=false. Elapsed: 24.127355106s Mar 17 11:47:11.132: INFO: Pod "pod-subpath-test-secret-5rhw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.131372892s STEP: Saw pod success Mar 17 11:47:11.132: INFO: Pod "pod-subpath-test-secret-5rhw" satisfied condition "success or failure" Mar 17 11:47:11.135: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-5rhw container test-container-subpath-secret-5rhw: STEP: delete the pod Mar 17 11:47:11.194: INFO: Waiting for pod pod-subpath-test-secret-5rhw to disappear Mar 17 11:47:11.215: INFO: Pod pod-subpath-test-secret-5rhw no longer exists STEP: Deleting pod pod-subpath-test-secret-5rhw Mar 17 11:47:11.215: INFO: Deleting pod "pod-subpath-test-secret-5rhw" in namespace "e2e-tests-subpath-mdq64" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:47:11.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-mdq64" for this suite. Mar 17 11:47:17.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:47:17.301: INFO: namespace: e2e-tests-subpath-mdq64, resource: bindings, ignored listing per whitelist Mar 17 11:47:17.311: INFO: namespace e2e-tests-subpath-mdq64 deletion completed in 6.091020517s • [SLOW TEST:32.447 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable 
Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:47:17.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Mar 17 11:47:17.427: INFO: Waiting up to 5m0s for pod "var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-var-expansion-2chpv" to be "success or failure" Mar 17 11:47:17.431: INFO: Pod "var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275391ms Mar 17 11:47:19.437: INFO: Pod "var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010373968s Mar 17 11:47:21.442: INFO: Pod "var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014828606s STEP: Saw pod success Mar 17 11:47:21.442: INFO: Pod "var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:47:21.445: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f container dapi-container: STEP: delete the pod Mar 17 11:47:21.462: INFO: Waiting for pod var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f to disappear Mar 17 11:47:21.467: INFO: Pod var-expansion-0d58e874-6845-11ea-b08f-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:47:21.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-2chpv" for this suite. Mar 17 11:47:27.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:47:27.574: INFO: namespace: e2e-tests-var-expansion-2chpv, resource: bindings, ignored listing per whitelist Mar 17 11:47:27.583: INFO: namespace e2e-tests-var-expansion-2chpv deletion completed in 6.114025689s • [SLOW TEST:10.272 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:47:27.583: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Mar 17 11:47:27.711: INFO: Waiting up to 5m0s for pod "var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-var-expansion-hdhfp" to be "success or failure" Mar 17 11:47:27.714: INFO: Pod "var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.878969ms Mar 17 11:47:29.718: INFO: Pod "var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006831091s Mar 17 11:47:31.722: INFO: Pod "var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010347031s STEP: Saw pod success Mar 17 11:47:31.722: INFO: Pod "var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:47:31.724: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f container dapi-container: STEP: delete the pod Mar 17 11:47:31.752: INFO: Waiting for pod var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f to disappear Mar 17 11:47:31.757: INFO: Pod var-expansion-137aadb0-6845-11ea-b08f-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:47:31.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-hdhfp" for this suite. 
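[Editor's note] The two Variable Expansion tests above exercise Kubernetes' documented `$(VAR)` substitution in a container's command and env: references to defined variables are replaced, `$$` escapes to a literal `$` (so `$$(VAR)` yields the literal string `$(VAR)`), and references to undefined variables are left unchanged. A minimal Python sketch of those rules, not the kubelet's actual Go implementation:

```python
def expand(s, env):
    """Expand $(NAME) references in s using env, per the documented
    Kubernetes rules: "$$" -> literal "$"; unknown names left as-is."""
    out, i = [], 0
    while i < len(s):
        if s.startswith("$$", i):
            out.append("$")          # escaped dollar sign
            i += 2
        elif s.startswith("$(", i):
            j = s.find(")", i)
            if j == -1:              # unterminated reference: keep literally
                out.append(s[i:])
                break
            name = s[i + 2:j]
            out.append(env.get(name, s[i:j + 1]))
            i = j + 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

For example, with `env = {"MESSAGE": "hello"}`, a command argument `$(MESSAGE)` becomes `hello`, while `$$(MESSAGE)` stays as the literal `$(MESSAGE)`.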
Mar 17 11:47:37.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:47:37.779: INFO: namespace: e2e-tests-var-expansion-hdhfp, resource: bindings, ignored listing per whitelist Mar 17 11:47:37.843: INFO: namespace e2e-tests-var-expansion-hdhfp deletion completed in 6.083374471s • [SLOW TEST:10.260 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:47:37.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-b2nhq [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful 
set ss in namespace e2e-tests-statefulset-b2nhq STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-b2nhq Mar 17 11:47:37.988: INFO: Found 0 stateful pods, waiting for 1 Mar 17 11:47:47.992: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 17 11:47:47.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:47:48.220: INFO: stderr: "I0317 11:47:48.133064 1605 log.go:172] (0xc000138630) (0xc00072c640) Create stream\nI0317 11:47:48.133242 1605 log.go:172] (0xc000138630) (0xc00072c640) Stream added, broadcasting: 1\nI0317 11:47:48.136492 1605 log.go:172] (0xc000138630) Reply frame received for 1\nI0317 11:47:48.136530 1605 log.go:172] (0xc000138630) (0xc00072c6e0) Create stream\nI0317 11:47:48.136544 1605 log.go:172] (0xc000138630) (0xc00072c6e0) Stream added, broadcasting: 3\nI0317 11:47:48.138503 1605 log.go:172] (0xc000138630) Reply frame received for 3\nI0317 11:47:48.138546 1605 log.go:172] (0xc000138630) (0xc00072c780) Create stream\nI0317 11:47:48.138567 1605 log.go:172] (0xc000138630) (0xc00072c780) Stream added, broadcasting: 5\nI0317 11:47:48.139508 1605 log.go:172] (0xc000138630) Reply frame received for 5\nI0317 11:47:48.214824 1605 log.go:172] (0xc000138630) Data frame received for 3\nI0317 11:47:48.214854 1605 log.go:172] (0xc00072c6e0) (3) Data frame handling\nI0317 11:47:48.214865 1605 log.go:172] (0xc00072c6e0) (3) Data frame sent\nI0317 11:47:48.214870 1605 log.go:172] (0xc000138630) Data frame received for 3\nI0317 11:47:48.214874 1605 log.go:172] (0xc00072c6e0) (3) Data frame handling\nI0317 11:47:48.214958 1605 log.go:172] (0xc000138630) Data frame received for 5\nI0317 11:47:48.214968 1605 log.go:172] (0xc00072c780) (5) Data frame 
handling\nI0317 11:47:48.217314 1605 log.go:172] (0xc000138630) Data frame received for 1\nI0317 11:47:48.217355 1605 log.go:172] (0xc00072c640) (1) Data frame handling\nI0317 11:47:48.217389 1605 log.go:172] (0xc00072c640) (1) Data frame sent\nI0317 11:47:48.217427 1605 log.go:172] (0xc000138630) (0xc00072c640) Stream removed, broadcasting: 1\nI0317 11:47:48.217454 1605 log.go:172] (0xc000138630) Go away received\nI0317 11:47:48.217598 1605 log.go:172] (0xc000138630) (0xc00072c640) Stream removed, broadcasting: 1\nI0317 11:47:48.217674 1605 log.go:172] (0xc000138630) (0xc00072c6e0) Stream removed, broadcasting: 3\nI0317 11:47:48.217681 1605 log.go:172] (0xc000138630) (0xc00072c780) Stream removed, broadcasting: 5\n" Mar 17 11:47:48.220: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:47:48.220: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:47:48.223: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 17 11:47:58.228: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:47:58.228: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:47:58.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999703s Mar 17 11:47:59.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991690234s Mar 17 11:48:00.255: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988672275s Mar 17 11:48:01.260: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983644595s Mar 17 11:48:02.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.97867995s Mar 17 11:48:03.284: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974275839s Mar 17 11:48:04.289: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.954121102s Mar 17 11:48:05.303: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 2.949077806s Mar 17 11:48:06.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.935659743s Mar 17 11:48:07.311: INFO: Verifying statefulset ss doesn't scale past 1 for another 931.220574ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-b2nhq Mar 17 11:48:08.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:48:08.549: INFO: stderr: "I0317 11:48:08.453984 1626 log.go:172] (0xc000138840) (0xc0005cb360) Create stream\nI0317 11:48:08.454398 1626 log.go:172] (0xc000138840) (0xc0005cb360) Stream added, broadcasting: 1\nI0317 11:48:08.459329 1626 log.go:172] (0xc000138840) Reply frame received for 1\nI0317 11:48:08.459403 1626 log.go:172] (0xc000138840) (0xc0005cb400) Create stream\nI0317 11:48:08.459432 1626 log.go:172] (0xc000138840) (0xc0005cb400) Stream added, broadcasting: 3\nI0317 11:48:08.460466 1626 log.go:172] (0xc000138840) Reply frame received for 3\nI0317 11:48:08.460509 1626 log.go:172] (0xc000138840) (0xc000744000) Create stream\nI0317 11:48:08.460532 1626 log.go:172] (0xc000138840) (0xc000744000) Stream added, broadcasting: 5\nI0317 11:48:08.461514 1626 log.go:172] (0xc000138840) Reply frame received for 5\nI0317 11:48:08.544991 1626 log.go:172] (0xc000138840) Data frame received for 5\nI0317 11:48:08.545039 1626 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:48:08.545090 1626 log.go:172] (0xc0005cb400) (3) Data frame handling\nI0317 11:48:08.545264 1626 log.go:172] (0xc000744000) (5) Data frame handling\nI0317 11:48:08.545320 1626 log.go:172] (0xc0005cb400) (3) Data frame sent\nI0317 11:48:08.545347 1626 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:48:08.545370 1626 log.go:172] (0xc0005cb400) (3) Data frame 
handling\nI0317 11:48:08.547006 1626 log.go:172] (0xc000138840) Data frame received for 1\nI0317 11:48:08.547035 1626 log.go:172] (0xc0005cb360) (1) Data frame handling\nI0317 11:48:08.547050 1626 log.go:172] (0xc0005cb360) (1) Data frame sent\nI0317 11:48:08.547068 1626 log.go:172] (0xc000138840) (0xc0005cb360) Stream removed, broadcasting: 1\nI0317 11:48:08.547132 1626 log.go:172] (0xc000138840) Go away received\nI0317 11:48:08.547209 1626 log.go:172] (0xc000138840) (0xc0005cb360) Stream removed, broadcasting: 1\nI0317 11:48:08.547223 1626 log.go:172] (0xc000138840) (0xc0005cb400) Stream removed, broadcasting: 3\nI0317 11:48:08.547232 1626 log.go:172] (0xc000138840) (0xc000744000) Stream removed, broadcasting: 5\n" Mar 17 11:48:08.550: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:48:08.550: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:48:08.554: INFO: Found 1 stateful pods, waiting for 3 Mar 17 11:48:18.559: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 11:48:18.559: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 11:48:18.559: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 17 11:48:18.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:48:18.777: INFO: stderr: "I0317 11:48:18.693235 1649 log.go:172] (0xc000138840) (0xc000750640) Create stream\nI0317 11:48:18.693305 1649 log.go:172] (0xc000138840) (0xc000750640) Stream added, broadcasting: 1\nI0317 11:48:18.697686 1649 log.go:172] (0xc000138840) Reply frame received for 
1\nI0317 11:48:18.697752 1649 log.go:172] (0xc000138840) (0xc00065adc0) Create stream\nI0317 11:48:18.697765 1649 log.go:172] (0xc000138840) (0xc00065adc0) Stream added, broadcasting: 3\nI0317 11:48:18.699514 1649 log.go:172] (0xc000138840) Reply frame received for 3\nI0317 11:48:18.699584 1649 log.go:172] (0xc000138840) (0xc0007506e0) Create stream\nI0317 11:48:18.699606 1649 log.go:172] (0xc000138840) (0xc0007506e0) Stream added, broadcasting: 5\nI0317 11:48:18.700776 1649 log.go:172] (0xc000138840) Reply frame received for 5\nI0317 11:48:18.772730 1649 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:48:18.772877 1649 log.go:172] (0xc00065adc0) (3) Data frame handling\nI0317 11:48:18.772895 1649 log.go:172] (0xc00065adc0) (3) Data frame sent\nI0317 11:48:18.772903 1649 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:48:18.772910 1649 log.go:172] (0xc00065adc0) (3) Data frame handling\nI0317 11:48:18.772942 1649 log.go:172] (0xc000138840) Data frame received for 5\nI0317 11:48:18.772952 1649 log.go:172] (0xc0007506e0) (5) Data frame handling\nI0317 11:48:18.774220 1649 log.go:172] (0xc000138840) Data frame received for 1\nI0317 11:48:18.774273 1649 log.go:172] (0xc000750640) (1) Data frame handling\nI0317 11:48:18.774306 1649 log.go:172] (0xc000750640) (1) Data frame sent\nI0317 11:48:18.774335 1649 log.go:172] (0xc000138840) (0xc000750640) Stream removed, broadcasting: 1\nI0317 11:48:18.774373 1649 log.go:172] (0xc000138840) Go away received\nI0317 11:48:18.774514 1649 log.go:172] (0xc000138840) (0xc000750640) Stream removed, broadcasting: 1\nI0317 11:48:18.774526 1649 log.go:172] (0xc000138840) (0xc00065adc0) Stream removed, broadcasting: 3\nI0317 11:48:18.774532 1649 log.go:172] (0xc000138840) (0xc0007506e0) Stream removed, broadcasting: 5\n" Mar 17 11:48:18.777: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:48:18.777: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on 
ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:48:18.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:48:18.998: INFO: stderr: "I0317 11:48:18.908545 1671 log.go:172] (0xc0007aa160) (0xc0006ce6e0) Create stream\nI0317 11:48:18.908606 1671 log.go:172] (0xc0007aa160) (0xc0006ce6e0) Stream added, broadcasting: 1\nI0317 11:48:18.911569 1671 log.go:172] (0xc0007aa160) Reply frame received for 1\nI0317 11:48:18.911623 1671 log.go:172] (0xc0007aa160) (0xc000686be0) Create stream\nI0317 11:48:18.911646 1671 log.go:172] (0xc0007aa160) (0xc000686be0) Stream added, broadcasting: 3\nI0317 11:48:18.912589 1671 log.go:172] (0xc0007aa160) Reply frame received for 3\nI0317 11:48:18.912628 1671 log.go:172] (0xc0007aa160) (0xc000508000) Create stream\nI0317 11:48:18.912645 1671 log.go:172] (0xc0007aa160) (0xc000508000) Stream added, broadcasting: 5\nI0317 11:48:18.914323 1671 log.go:172] (0xc0007aa160) Reply frame received for 5\nI0317 11:48:18.991628 1671 log.go:172] (0xc0007aa160) Data frame received for 3\nI0317 11:48:18.991656 1671 log.go:172] (0xc000686be0) (3) Data frame handling\nI0317 11:48:18.991670 1671 log.go:172] (0xc000686be0) (3) Data frame sent\nI0317 11:48:18.991894 1671 log.go:172] (0xc0007aa160) Data frame received for 3\nI0317 11:48:18.991945 1671 log.go:172] (0xc000686be0) (3) Data frame handling\nI0317 11:48:18.991974 1671 log.go:172] (0xc0007aa160) Data frame received for 5\nI0317 11:48:18.991990 1671 log.go:172] (0xc000508000) (5) Data frame handling\nI0317 11:48:18.993988 1671 log.go:172] (0xc0007aa160) Data frame received for 1\nI0317 11:48:18.994030 1671 log.go:172] (0xc0006ce6e0) (1) Data frame handling\nI0317 11:48:18.994045 1671 log.go:172] (0xc0006ce6e0) (1) Data frame sent\nI0317 11:48:18.994076 1671 log.go:172] (0xc0007aa160) (0xc0006ce6e0) Stream removed, broadcasting: 
1\nI0317 11:48:18.994108 1671 log.go:172] (0xc0007aa160) Go away received\nI0317 11:48:18.994303 1671 log.go:172] (0xc0007aa160) (0xc0006ce6e0) Stream removed, broadcasting: 1\nI0317 11:48:18.994325 1671 log.go:172] (0xc0007aa160) (0xc000686be0) Stream removed, broadcasting: 3\nI0317 11:48:18.994336 1671 log.go:172] (0xc0007aa160) (0xc000508000) Stream removed, broadcasting: 5\n" Mar 17 11:48:18.998: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:48:18.998: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:48:18.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:48:19.208: INFO: stderr: "I0317 11:48:19.113263 1694 log.go:172] (0xc000138840) (0xc00065b400) Create stream\nI0317 11:48:19.113318 1694 log.go:172] (0xc000138840) (0xc00065b400) Stream added, broadcasting: 1\nI0317 11:48:19.115587 1694 log.go:172] (0xc000138840) Reply frame received for 1\nI0317 11:48:19.115665 1694 log.go:172] (0xc000138840) (0xc0006ea000) Create stream\nI0317 11:48:19.115696 1694 log.go:172] (0xc000138840) (0xc0006ea000) Stream added, broadcasting: 3\nI0317 11:48:19.116814 1694 log.go:172] (0xc000138840) Reply frame received for 3\nI0317 11:48:19.116858 1694 log.go:172] (0xc000138840) (0xc00065b4a0) Create stream\nI0317 11:48:19.116871 1694 log.go:172] (0xc000138840) (0xc00065b4a0) Stream added, broadcasting: 5\nI0317 11:48:19.117955 1694 log.go:172] (0xc000138840) Reply frame received for 5\nI0317 11:48:19.200680 1694 log.go:172] (0xc000138840) Data frame received for 5\nI0317 11:48:19.200745 1694 log.go:172] (0xc00065b4a0) (5) Data frame handling\nI0317 11:48:19.200783 1694 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:48:19.200808 1694 log.go:172] (0xc0006ea000) (3) Data frame 
handling\nI0317 11:48:19.200837 1694 log.go:172] (0xc0006ea000) (3) Data frame sent\nI0317 11:48:19.200860 1694 log.go:172] (0xc000138840) Data frame received for 3\nI0317 11:48:19.200877 1694 log.go:172] (0xc0006ea000) (3) Data frame handling\nI0317 11:48:19.203236 1694 log.go:172] (0xc000138840) Data frame received for 1\nI0317 11:48:19.203283 1694 log.go:172] (0xc00065b400) (1) Data frame handling\nI0317 11:48:19.203403 1694 log.go:172] (0xc00065b400) (1) Data frame sent\nI0317 11:48:19.203544 1694 log.go:172] (0xc000138840) (0xc00065b400) Stream removed, broadcasting: 1\nI0317 11:48:19.203680 1694 log.go:172] (0xc000138840) Go away received\nI0317 11:48:19.203867 1694 log.go:172] (0xc000138840) (0xc00065b400) Stream removed, broadcasting: 1\nI0317 11:48:19.203908 1694 log.go:172] (0xc000138840) (0xc0006ea000) Stream removed, broadcasting: 3\nI0317 11:48:19.203932 1694 log.go:172] (0xc000138840) (0xc00065b4a0) Stream removed, broadcasting: 5\n" Mar 17 11:48:19.208: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:48:19.208: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:48:19.208: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:48:19.211: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 17 11:48:29.220: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:48:29.220: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:48:29.220: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:48:29.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999643s Mar 17 11:48:30.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983769943s Mar 17 11:48:31.253: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 7.978759104s Mar 17 11:48:32.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973629775s Mar 17 11:48:33.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958561361s Mar 17 11:48:34.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952977491s Mar 17 11:48:35.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.947676455s Mar 17 11:48:36.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.94239028s Mar 17 11:48:37.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.937213786s Mar 17 11:48:38.300: INFO: Verifying statefulset ss doesn't scale past 3 for another 931.8681ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-b2nhq Mar 17 11:48:39.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:48:39.520: INFO: stderr: "I0317 11:48:39.440847 1716 log.go:172] (0xc0007a02c0) (0xc0006e74a0) Create stream\nI0317 11:48:39.440921 1716 log.go:172] (0xc0007a02c0) (0xc0006e74a0) Stream added, broadcasting: 1\nI0317 11:48:39.443430 1716 log.go:172] (0xc0007a02c0) Reply frame received for 1\nI0317 11:48:39.443481 1716 log.go:172] (0xc0007a02c0) (0xc0006e7540) Create stream\nI0317 11:48:39.443499 1716 log.go:172] (0xc0007a02c0) (0xc0006e7540) Stream added, broadcasting: 3\nI0317 11:48:39.444338 1716 log.go:172] (0xc0007a02c0) Reply frame received for 3\nI0317 11:48:39.444367 1716 log.go:172] (0xc0007a02c0) (0xc0006e75e0) Create stream\nI0317 11:48:39.444374 1716 log.go:172] (0xc0007a02c0) (0xc0006e75e0) Stream added, broadcasting: 5\nI0317 11:48:39.445221 1716 log.go:172] (0xc0007a02c0) Reply frame received for 5\nI0317 11:48:39.514375 1716 log.go:172] (0xc0007a02c0) Data frame received for 5\nI0317 11:48:39.514415 1716 log.go:172] 
(0xc0006e75e0) (5) Data frame handling\nI0317 11:48:39.514438 1716 log.go:172] (0xc0007a02c0) Data frame received for 3\nI0317 11:48:39.514451 1716 log.go:172] (0xc0006e7540) (3) Data frame handling\nI0317 11:48:39.514469 1716 log.go:172] (0xc0006e7540) (3) Data frame sent\nI0317 11:48:39.514481 1716 log.go:172] (0xc0007a02c0) Data frame received for 3\nI0317 11:48:39.514490 1716 log.go:172] (0xc0006e7540) (3) Data frame handling\nI0317 11:48:39.515973 1716 log.go:172] (0xc0007a02c0) Data frame received for 1\nI0317 11:48:39.515988 1716 log.go:172] (0xc0006e74a0) (1) Data frame handling\nI0317 11:48:39.516008 1716 log.go:172] (0xc0006e74a0) (1) Data frame sent\nI0317 11:48:39.516037 1716 log.go:172] (0xc0007a02c0) (0xc0006e74a0) Stream removed, broadcasting: 1\nI0317 11:48:39.516188 1716 log.go:172] (0xc0007a02c0) (0xc0006e74a0) Stream removed, broadcasting: 1\nI0317 11:48:39.516201 1716 log.go:172] (0xc0007a02c0) (0xc0006e7540) Stream removed, broadcasting: 3\nI0317 11:48:39.516239 1716 log.go:172] (0xc0007a02c0) Go away received\nI0317 11:48:39.516364 1716 log.go:172] (0xc0007a02c0) (0xc0006e75e0) Stream removed, broadcasting: 5\n" Mar 17 11:48:39.520: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:48:39.520: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:48:39.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:48:39.712: INFO: stderr: "I0317 11:48:39.643598 1738 log.go:172] (0xc00014c840) (0xc00073e640) Create stream\nI0317 11:48:39.643672 1738 log.go:172] (0xc00014c840) (0xc00073e640) Stream added, broadcasting: 1\nI0317 11:48:39.646188 1738 log.go:172] (0xc00014c840) Reply frame received for 1\nI0317 11:48:39.646238 1738 log.go:172] (0xc00014c840) (0xc00073e6e0) Create 
stream\nI0317 11:48:39.646252 1738 log.go:172] (0xc00014c840) (0xc00073e6e0) Stream added, broadcasting: 3\nI0317 11:48:39.647277 1738 log.go:172] (0xc00014c840) Reply frame received for 3\nI0317 11:48:39.647330 1738 log.go:172] (0xc00014c840) (0xc00073e780) Create stream\nI0317 11:48:39.647345 1738 log.go:172] (0xc00014c840) (0xc00073e780) Stream added, broadcasting: 5\nI0317 11:48:39.648264 1738 log.go:172] (0xc00014c840) Reply frame received for 5\nI0317 11:48:39.707339 1738 log.go:172] (0xc00014c840) Data frame received for 3\nI0317 11:48:39.707367 1738 log.go:172] (0xc00073e6e0) (3) Data frame handling\nI0317 11:48:39.707388 1738 log.go:172] (0xc00073e6e0) (3) Data frame sent\nI0317 11:48:39.707398 1738 log.go:172] (0xc00014c840) Data frame received for 3\nI0317 11:48:39.707403 1738 log.go:172] (0xc00073e6e0) (3) Data frame handling\nI0317 11:48:39.707636 1738 log.go:172] (0xc00014c840) Data frame received for 5\nI0317 11:48:39.707651 1738 log.go:172] (0xc00073e780) (5) Data frame handling\nI0317 11:48:39.708779 1738 log.go:172] (0xc00014c840) Data frame received for 1\nI0317 11:48:39.708798 1738 log.go:172] (0xc00073e640) (1) Data frame handling\nI0317 11:48:39.708808 1738 log.go:172] (0xc00073e640) (1) Data frame sent\nI0317 11:48:39.708820 1738 log.go:172] (0xc00014c840) (0xc00073e640) Stream removed, broadcasting: 1\nI0317 11:48:39.708840 1738 log.go:172] (0xc00014c840) Go away received\nI0317 11:48:39.709029 1738 log.go:172] (0xc00014c840) (0xc00073e640) Stream removed, broadcasting: 1\nI0317 11:48:39.709047 1738 log.go:172] (0xc00014c840) (0xc00073e6e0) Stream removed, broadcasting: 3\nI0317 11:48:39.709061 1738 log.go:172] (0xc00014c840) (0xc00073e780) Stream removed, broadcasting: 5\n" Mar 17 11:48:39.712: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:48:39.712: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:48:39.712: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b2nhq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:48:39.914: INFO: stderr: "I0317 11:48:39.849698 1760 log.go:172] (0xc0006f44d0) (0xc000732640) Create stream\nI0317 11:48:39.849782 1760 log.go:172] (0xc0006f44d0) (0xc000732640) Stream added, broadcasting: 1\nI0317 11:48:39.852282 1760 log.go:172] (0xc0006f44d0) Reply frame received for 1\nI0317 11:48:39.852315 1760 log.go:172] (0xc0006f44d0) (0xc0007326e0) Create stream\nI0317 11:48:39.852325 1760 log.go:172] (0xc0006f44d0) (0xc0007326e0) Stream added, broadcasting: 3\nI0317 11:48:39.853242 1760 log.go:172] (0xc0006f44d0) Reply frame received for 3\nI0317 11:48:39.853280 1760 log.go:172] (0xc0006f44d0) (0xc000732780) Create stream\nI0317 11:48:39.853294 1760 log.go:172] (0xc0006f44d0) (0xc000732780) Stream added, broadcasting: 5\nI0317 11:48:39.854360 1760 log.go:172] (0xc0006f44d0) Reply frame received for 5\nI0317 11:48:39.908118 1760 log.go:172] (0xc0006f44d0) Data frame received for 5\nI0317 11:48:39.908149 1760 log.go:172] (0xc000732780) (5) Data frame handling\nI0317 11:48:39.908205 1760 log.go:172] (0xc0006f44d0) Data frame received for 3\nI0317 11:48:39.908243 1760 log.go:172] (0xc0007326e0) (3) Data frame handling\nI0317 11:48:39.908278 1760 log.go:172] (0xc0007326e0) (3) Data frame sent\nI0317 11:48:39.908304 1760 log.go:172] (0xc0006f44d0) Data frame received for 3\nI0317 11:48:39.908320 1760 log.go:172] (0xc0007326e0) (3) Data frame handling\nI0317 11:48:39.910153 1760 log.go:172] (0xc0006f44d0) Data frame received for 1\nI0317 11:48:39.910176 1760 log.go:172] (0xc000732640) (1) Data frame handling\nI0317 11:48:39.910187 1760 log.go:172] (0xc000732640) (1) Data frame sent\nI0317 11:48:39.910202 1760 log.go:172] (0xc0006f44d0) (0xc000732640) Stream removed, broadcasting: 1\nI0317 11:48:39.910229 1760 log.go:172] (0xc0006f44d0) Go away received\nI0317 
11:48:39.910487 1760 log.go:172] (0xc0006f44d0) (0xc000732640) Stream removed, broadcasting: 1\nI0317 11:48:39.910523 1760 log.go:172] (0xc0006f44d0) (0xc0007326e0) Stream removed, broadcasting: 3\nI0317 11:48:39.910544 1760 log.go:172] (0xc0006f44d0) (0xc000732780) Stream removed, broadcasting: 5\n" Mar 17 11:48:39.914: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:48:39.914: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:48:39.914: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 17 11:49:19.952: INFO: Deleting all statefulset in ns e2e-tests-statefulset-b2nhq Mar 17 11:49:19.955: INFO: Scaling statefulset ss to 0 Mar 17 11:49:19.964: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:49:19.966: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:49:19.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-b2nhq" for this suite. 
Mar 17 11:49:25.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:49:26.056: INFO: namespace: e2e-tests-statefulset-b2nhq, resource: bindings, ignored listing per whitelist Mar 17 11:49:26.062: INFO: namespace e2e-tests-statefulset-b2nhq deletion completed in 6.083466941s • [SLOW TEST:108.219 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:49:26.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-5a165461-6845-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:49:26.192: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-vrpkz" to be "success or failure" Mar 17 11:49:26.201: 
INFO: Pod "pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.500816ms Mar 17 11:49:28.205: INFO: Pod "pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01310196s Mar 17 11:49:30.209: INFO: Pod "pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017105138s STEP: Saw pod success Mar 17 11:49:30.209: INFO: Pod "pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:49:30.212: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 17 11:49:30.245: INFO: Waiting for pod pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f to disappear Mar 17 11:49:30.280: INFO: Pod pod-configmaps-5a170714-6845-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:49:30.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vrpkz" for this suite. 
Mar 17 11:49:36.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:49:36.320: INFO: namespace: e2e-tests-configmap-vrpkz, resource: bindings, ignored listing per whitelist Mar 17 11:49:36.367: INFO: namespace e2e-tests-configmap-vrpkz deletion completed in 6.083065428s • [SLOW TEST:10.304 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:49:36.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:49:36.456: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:49:40.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5kdn8" for this suite. 
Mar 17 11:50:18.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:50:18.603: INFO: namespace: e2e-tests-pods-5kdn8, resource: bindings, ignored listing per whitelist
Mar 17 11:50:18.635: INFO: namespace e2e-tests-pods-5kdn8 deletion completed in 38.095642447s
• [SLOW TEST:42.268 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:50:18.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Mar 17 11:50:18.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7w6gc,SelfLink:/api/v1/namespaces/e2e-tests-watch-7w6gc/configmaps/e2e-watch-test-label-changed,UID:7970a0ca-6845-11ea-99e8-0242ac110002,ResourceVersion:322087,Generation:0,CreationTimestamp:2020-03-17 11:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 11:50:18.816: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7w6gc,SelfLink:/api/v1/namespaces/e2e-tests-watch-7w6gc/configmaps/e2e-watch-test-label-changed,UID:7970a0ca-6845-11ea-99e8-0242ac110002,ResourceVersion:322088,Generation:0,CreationTimestamp:2020-03-17 11:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 17 11:50:18.816: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7w6gc,SelfLink:/api/v1/namespaces/e2e-tests-watch-7w6gc/configmaps/e2e-watch-test-label-changed,UID:7970a0ca-6845-11ea-99e8-0242ac110002,ResourceVersion:322089,Generation:0,CreationTimestamp:2020-03-17 11:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar 17 11:50:28.899: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7w6gc,SelfLink:/api/v1/namespaces/e2e-tests-watch-7w6gc/configmaps/e2e-watch-test-label-changed,UID:7970a0ca-6845-11ea-99e8-0242ac110002,ResourceVersion:322110,Generation:0,CreationTimestamp:2020-03-17 11:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 17 11:50:28.899: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7w6gc,SelfLink:/api/v1/namespaces/e2e-tests-watch-7w6gc/configmaps/e2e-watch-test-label-changed,UID:7970a0ca-6845-11ea-99e8-0242ac110002,ResourceVersion:322111,Generation:0,CreationTimestamp:2020-03-17 11:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Mar 17 11:50:28.899: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7w6gc,SelfLink:/api/v1/namespaces/e2e-tests-watch-7w6gc/configmaps/e2e-watch-test-label-changed,UID:7970a0ca-6845-11ea-99e8-0242ac110002,ResourceVersion:322112,Generation:0,CreationTimestamp:2020-03-17 11:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:50:28.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7w6gc" for this suite.
Mar 17 11:50:34.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:50:35.024: INFO: namespace: e2e-tests-watch-7w6gc, resource: bindings, ignored listing per whitelist
Mar 17 11:50:35.100: INFO: namespace e2e-tests-watch-7w6gc deletion completed in 6.189635146s
• [SLOW TEST:16.464 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:50:35.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 17 11:50:35.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pmx7p'
Mar 17 11:50:37.203: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 17 11:50:37.203: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Mar 17 11:50:37.211: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Mar 17 11:50:37.231: INFO: scanned /root for discovery docs:
Mar 17 11:50:37.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-pmx7p'
Mar 17 11:50:53.107: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 17 11:50:53.107: INFO: stdout: "Created e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7\nScaling up e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Mar 17 11:50:53.107: INFO: stdout: "Created e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7\nScaling up e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Mar 17 11:50:53.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pmx7p'
Mar 17 11:50:53.213: INFO: stderr: ""
Mar 17 11:50:53.213: INFO: stdout: "e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7-zjkkw "
Mar 17 11:50:53.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7-zjkkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmx7p'
Mar 17 11:50:53.316: INFO: stderr: ""
Mar 17 11:50:53.317: INFO: stdout: "true"
Mar 17 11:50:53.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7-zjkkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmx7p'
Mar 17 11:50:53.408: INFO: stderr: ""
Mar 17 11:50:53.408: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Mar 17 11:50:53.408: INFO: e2e-test-nginx-rc-2bc16aa581fd046c005f972fcdc142f7-zjkkw is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Mar 17 11:50:53.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pmx7p'
Mar 17 11:50:53.513: INFO: stderr: ""
Mar 17 11:50:53.513: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:50:53.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pmx7p" for this suite.
Mar 17 11:50:59.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:50:59.581: INFO: namespace: e2e-tests-kubectl-pmx7p, resource: bindings, ignored listing per whitelist
Mar 17 11:50:59.635: INFO: namespace e2e-tests-kubectl-pmx7p deletion completed in 6.093558148s
• [SLOW TEST:24.535 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:50:59.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 17 11:50:59.744: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:51:05.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-2m64k" for this suite.
Mar 17 11:51:27.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:51:27.949: INFO: namespace: e2e-tests-init-container-2m64k, resource: bindings, ignored listing per whitelist
Mar 17 11:51:27.999: INFO: namespace e2e-tests-init-container-2m64k deletion completed in 22.084631166s
• [SLOW TEST:28.364 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:51:27.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:51:28.097: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-lclzx" to be "success or failure"
Mar 17 11:51:28.108: INFO: Pod "downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.526192ms
Mar 17 11:51:30.112: INFO: Pod "downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014416544s
Mar 17 11:51:32.116: INFO: Pod "downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01843458s
STEP: Saw pod success
Mar 17 11:51:32.116: INFO: Pod "downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:51:32.119: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f container client-container:
STEP: delete the pod
Mar 17 11:51:32.152: INFO: Waiting for pod downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f to disappear
Mar 17 11:51:32.162: INFO: Pod downwardapi-volume-a2c0e014-6845-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:51:32.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lclzx" for this suite.
Mar 17 11:51:38.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:51:38.199: INFO: namespace: e2e-tests-downward-api-lclzx, resource: bindings, ignored listing per whitelist
Mar 17 11:51:38.258: INFO: namespace e2e-tests-downward-api-lclzx deletion completed in 6.093576742s
• [SLOW TEST:10.259 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:51:38.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a8e294cd-6845-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 17 11:51:38.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-q8hp9" to be "success or failure"
Mar 17 11:51:38.395: INFO: Pod "pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.136564ms
Mar 17 11:51:40.399: INFO: Pod "pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018640664s
Mar 17 11:51:42.403: INFO: Pod "pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022900031s
STEP: Saw pod success
Mar 17 11:51:42.403: INFO: Pod "pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:51:42.406: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 17 11:51:42.428: INFO: Waiting for pod pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f to disappear
Mar 17 11:51:42.432: INFO: Pod pod-projected-configmaps-a8e34c31-6845-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:51:42.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q8hp9" for this suite.
Mar 17 11:51:48.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:51:48.598: INFO: namespace: e2e-tests-projected-q8hp9, resource: bindings, ignored listing per whitelist
Mar 17 11:51:48.606: INFO: namespace e2e-tests-projected-q8hp9 deletion completed in 6.170324671s
• [SLOW TEST:10.347 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:51:48.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-af0e5daf-6845-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 17 11:51:48.733: INFO: Waiting up to 5m0s for pod "pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-s4txk" to be "success or failure"
Mar 17 11:51:48.737: INFO: Pod "pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476033ms
Mar 17 11:51:50.741: INFO: Pod "pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008529911s
Mar 17 11:51:52.745: INFO: Pod "pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012721075s
STEP: Saw pod success
Mar 17 11:51:52.745: INFO: Pod "pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:51:52.748: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 17 11:51:52.822: INFO: Waiting for pod pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f to disappear
Mar 17 11:51:52.827: INFO: Pod pod-secrets-af0edd0b-6845-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:51:52.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s4txk" for this suite.
Mar 17 11:51:58.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:51:58.853: INFO: namespace: e2e-tests-secrets-s4txk, resource: bindings, ignored listing per whitelist
Mar 17 11:51:58.927: INFO: namespace e2e-tests-secrets-s4txk deletion completed in 6.096915535s
• [SLOW TEST:10.321 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:51:58.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:51:59.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-rqkmf" to be "success or failure"
Mar 17 11:51:59.025: INFO: Pod "downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.788548ms
Mar 17 11:52:01.028: INFO: Pod "downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006430873s
Mar 17 11:52:03.032: INFO: Pod "downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010542323s
STEP: Saw pod success
Mar 17 11:52:03.032: INFO: Pod "downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:52:03.035: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f container client-container:
STEP: delete the pod
Mar 17 11:52:03.077: INFO: Waiting for pod downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f to disappear
Mar 17 11:52:03.097: INFO: Pod downwardapi-volume-b530bfba-6845-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:03.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rqkmf" for this suite.
Mar 17 11:52:09.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:09.192: INFO: namespace: e2e-tests-projected-rqkmf, resource: bindings, ignored listing per whitelist
Mar 17 11:52:09.215: INFO: namespace e2e-tests-projected-rqkmf deletion completed in 6.113262579s
• [SLOW TEST:10.287 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:09.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Mar 17 11:52:09.333: INFO: Waiting up to 5m0s for pod "client-containers-bb5550af-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-containers-xhklh" to be "success or failure"
Mar 17 11:52:09.343: INFO: Pod "client-containers-bb5550af-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.634616ms
Mar 17 11:52:11.360: INFO: Pod "client-containers-bb5550af-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02733974s
Mar 17 11:52:13.364: INFO: Pod "client-containers-bb5550af-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031286264s
STEP: Saw pod success
Mar 17 11:52:13.364: INFO: Pod "client-containers-bb5550af-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:52:13.367: INFO: Trying to get logs from node hunter-worker pod client-containers-bb5550af-6845-11ea-b08f-0242ac11000f container test-container:
STEP: delete the pod
Mar 17 11:52:13.415: INFO: Waiting for pod client-containers-bb5550af-6845-11ea-b08f-0242ac11000f to disappear
Mar 17 11:52:13.420: INFO: Pod client-containers-bb5550af-6845-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:13.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-xhklh" for this suite.
Mar 17 11:52:19.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:19.448: INFO: namespace: e2e-tests-containers-xhklh, resource: bindings, ignored listing per whitelist
Mar 17 11:52:19.510: INFO: namespace e2e-tests-containers-xhklh deletion completed in 6.086999133s
• [SLOW TEST:10.294 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:19.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0317 11:52:20.676444       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 11:52:20.676: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:20.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tpz6m" for this suite.
Mar 17 11:52:26.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:26.736: INFO: namespace: e2e-tests-gc-tpz6m, resource: bindings, ignored listing per whitelist
Mar 17 11:52:26.770: INFO: namespace e2e-tests-gc-tpz6m deletion completed in 6.090349326s
• [SLOW TEST:7.259 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:26.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 17 11:52:26.857: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 17 11:52:26.873: INFO: Waiting for terminating namespaces to be deleted...
Mar 17 11:52:26.876: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Mar 17 11:52:26.882: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
Mar 17 11:52:26.882: INFO: Container kube-proxy ready: true, restart count 0
Mar 17 11:52:26.882: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 17 11:52:26.882: INFO: Container kindnet-cni ready: true, restart count 0
Mar 17 11:52:26.882: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 17 11:52:26.882: INFO: Container coredns ready: true, restart count 0
Mar 17 11:52:26.882: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Mar 17 11:52:26.888: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 17 11:52:26.888: INFO: Container kindnet-cni ready: true, restart count 0
Mar 17 11:52:26.888: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 17 11:52:26.888: INFO: Container coredns ready: true, restart count 0
Mar 17 11:52:26.888: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 17 11:52:26.888: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Mar 17 11:52:26.945: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker
Mar 17 11:52:26.945: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2
Mar 17 11:52:26.945: INFO:
Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Mar 17 11:52:26.945: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Mar 17 11:52:26.945: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Mar 17 11:52:26.945: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c5d6eed7-6845-11ea-b08f-0242ac11000f.15fd1531aabe7153], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-dhdq5/filler-pod-c5d6eed7-6845-11ea-b08f-0242ac11000f to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c5d6eed7-6845-11ea-b08f-0242ac11000f.15fd1531f5a3ea96], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c5d6eed7-6845-11ea-b08f-0242ac11000f.15fd153227bd26aa], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c5d6eed7-6845-11ea-b08f-0242ac11000f.15fd153236c8c6fe], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c5d88dd3-6845-11ea-b08f-0242ac11000f.15fd1531acd76bd7], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-dhdq5/filler-pod-c5d88dd3-6845-11ea-b08f-0242ac11000f to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c5d88dd3-6845-11ea-b08f-0242ac11000f.15fd15321be35fc0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c5d88dd3-6845-11ea-b08f-0242ac11000f.15fd153249b8aa89], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-c5d88dd3-6845-11ea-b08f-0242ac11000f.15fd1532580c1dda], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fd15329c498526], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:52:32.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-dhdq5" for this suite. Mar 17 11:52:38.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:52:38.186: INFO: namespace: e2e-tests-sched-pred-dhdq5, resource: bindings, ignored listing per whitelist Mar 17 11:52:38.249: INFO: namespace e2e-tests-sched-pred-dhdq5 deletion completed in 6.096677967s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:11.479 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API 
volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:52:38.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 11:52:38.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-8jprm" to be "success or failure" Mar 17 11:52:38.379: INFO: Pod "downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.690398ms Mar 17 11:52:40.384: INFO: Pod "downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008762548s Mar 17 11:52:42.389: INFO: Pod "downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012969439s STEP: Saw pod success Mar 17 11:52:42.389: INFO: Pod "downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:52:42.391: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 11:52:42.411: INFO: Waiting for pod downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f to disappear Mar 17 11:52:42.429: INFO: Pod downwardapi-volume-cca4be30-6845-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:52:42.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8jprm" for this suite. Mar 17 11:52:48.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:52:48.514: INFO: namespace: e2e-tests-downward-api-8jprm, resource: bindings, ignored listing per whitelist Mar 17 11:52:48.544: INFO: namespace e2e-tests-downward-api-8jprm deletion completed in 6.111938816s • [SLOW TEST:10.295 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 
11:52:48.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0317 11:52:58.673782 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 11:52:58.673: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:58.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying 
namespace "e2e-tests-gc-rc97g" for this suite. Mar 17 11:53:04.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:53:04.772: INFO: namespace: e2e-tests-gc-rc97g, resource: bindings, ignored listing per whitelist Mar 17 11:53:04.776: INFO: namespace e2e-tests-gc-rc97g deletion completed in 6.099445105s • [SLOW TEST:16.231 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:53:04.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 17 11:53:04.885: INFO: Waiting up to 5m0s for pod "pod-dc7207c3-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-qdfdx" to be "success or failure" Mar 17 11:53:04.895: INFO: Pod "pod-dc7207c3-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.737139ms Mar 17 11:53:06.899: INFO: Pod "pod-dc7207c3-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013822753s Mar 17 11:53:08.903: INFO: Pod "pod-dc7207c3-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017992375s STEP: Saw pod success Mar 17 11:53:08.904: INFO: Pod "pod-dc7207c3-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:53:08.907: INFO: Trying to get logs from node hunter-worker pod pod-dc7207c3-6845-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:53:08.938: INFO: Waiting for pod pod-dc7207c3-6845-11ea-b08f-0242ac11000f to disappear Mar 17 11:53:08.954: INFO: Pod pod-dc7207c3-6845-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:53:08.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qdfdx" for this suite. Mar 17 11:53:14.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:53:15.036: INFO: namespace: e2e-tests-emptydir-qdfdx, resource: bindings, ignored listing per whitelist Mar 17 11:53:15.066: INFO: namespace e2e-tests-emptydir-qdfdx deletion completed in 6.108394787s • [SLOW TEST:10.289 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:53:15.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:53:15.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Mar 17 11:53:15.228: INFO: stderr: "" Mar 17 11:53:15.228: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Mar 17 11:53:15.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c6zxx' Mar 17 11:53:15.608: INFO: stderr: "" Mar 17 11:53:15.608: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 17 11:53:15.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c6zxx' Mar 17 11:53:15.954: INFO: stderr: "" Mar 17 11:53:15.954: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
Mar 17 11:53:16.959: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:53:16.959: INFO: Found 0 / 1 Mar 17 11:53:17.984: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:53:17.984: INFO: Found 0 / 1 Mar 17 11:53:18.958: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:53:18.958: INFO: Found 1 / 1 Mar 17 11:53:18.958: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 17 11:53:18.961: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:53:18.961: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 17 11:53:18.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-jwx7n --namespace=e2e-tests-kubectl-c6zxx' Mar 17 11:53:19.069: INFO: stderr: "" Mar 17 11:53:19.070: INFO: stdout: "Name: redis-master-jwx7n\nNamespace: e2e-tests-kubectl-c6zxx\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Tue, 17 Mar 2020 11:53:15 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.97\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://c7b4f45a8d3316ea8469f38843fa1394c355bb7a51182e411f5912743f165cf7\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 17 Mar 2020 11:53:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pjffm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pjffm:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pjffm\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n 
node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-c6zxx/redis-master-jwx7n to hunter-worker2\n Normal Pulled 2s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Mar 17 11:53:19.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-c6zxx' Mar 17 11:53:19.206: INFO: stderr: "" Mar 17 11:53:19.206: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-c6zxx\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-jwx7n\n" Mar 17 11:53:19.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-c6zxx' Mar 17 11:53:19.323: INFO: stderr: "" Mar 17 11:53:19.323: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-c6zxx\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.56.195\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.97:6379\nSession Affinity: None\nEvents: \n" Mar 17 11:53:19.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Mar 17 11:53:19.460: INFO: 
stderr: "" Mar 17 11:53:19.460: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 17 Mar 2020 11:53:12 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 17 Mar 2020 11:53:12 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 17 Mar 2020 11:53:12 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 17 Mar 2020 11:53:12 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated 
Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41h\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 41h\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 41h\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 41h\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41h\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 41h\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 17 11:53:19.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-c6zxx' Mar 17 11:53:19.567: INFO: stderr: "" Mar 17 11:53:19.567: INFO: stdout: "Name: e2e-tests-kubectl-c6zxx\nLabels: e2e-framework=kubectl\n e2e-run=97158376-683c-11ea-b08f-0242ac11000f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:53:19.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c6zxx" for this suite. 
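The `kubectl describe node` output above reports `cpu 650m (4%)` under "Allocated resources" against 16 allocatable cores. The parenthesized figure is just the sum of pod CPU requests divided by allocatable CPU, truncated to a whole percent. A quick check of the arithmetic (illustrative only):

```python
# Reproduce the "cpu 650m (4%)" line from the describe output above:
# 250m (kube-apiserver) + 200m (kube-controller-manager)
# + 100m (kube-scheduler) + 100m (kindnet) = 650m requested.
requests_m = 250 + 200 + 100 + 100       # millicores requested
allocatable_m = 16 * 1000                # 16 cores, in millicores
pct = requests_m * 100 // allocatable_m  # integer percent, as kubectl prints it
print(requests_m, pct)  # 650 4
```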
Mar 17 11:53:41.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:53:41.651: INFO: namespace: e2e-tests-kubectl-c6zxx, resource: bindings, ignored listing per whitelist Mar 17 11:53:41.665: INFO: namespace e2e-tests-kubectl-c6zxx deletion completed in 22.09436609s • [SLOW TEST:26.599 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:53:41.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 17 11:53:41.788: INFO: Waiting up to 5m0s for pod "pod-f26e4342-6845-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-wj4zm" to be "success or failure" Mar 17 11:53:41.794: INFO: Pod "pod-f26e4342-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.376356ms Mar 17 11:53:43.798: INFO: Pod "pod-f26e4342-6845-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009153349s Mar 17 11:53:45.802: INFO: Pod "pod-f26e4342-6845-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013454302s STEP: Saw pod success Mar 17 11:53:45.802: INFO: Pod "pod-f26e4342-6845-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:53:45.806: INFO: Trying to get logs from node hunter-worker pod pod-f26e4342-6845-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:53:45.825: INFO: Waiting for pod pod-f26e4342-6845-11ea-b08f-0242ac11000f to disappear Mar 17 11:53:45.852: INFO: Pod pod-f26e4342-6845-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:53:45.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wj4zm" for this suite. 
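The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the e2e framework polling the pod's phase until it reaches a terminal state or the 5m0s timeout expires. A minimal sketch of that wait loop (stubbed client and hypothetical names; this stands in for the framework's Go helpers, it is not their implementation):

```python
import time

def wait_for_phase(get_phase, timeout_s=300, poll_s=2.0):
    """Poll until the pod reaches Succeeded/Failed or the timeout elapses.

    `get_phase` stands in for an API-server call; names here are
    illustrative, not the e2e framework's actual helpers.
    """
    start = time.monotonic()
    phase = get_phase()
    while time.monotonic() - start < timeout_s:
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll_s)
        phase = get_phase()
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")

# Stub that reports Pending twice, then Succeeded -- like the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), timeout_s=60, poll_s=0))  # Succeeded
```

Each poll that still sees `Pending` produces one of the `Elapsed:` log lines, which is why the elapsed values step up by roughly the poll interval.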
Mar 17 11:53:51.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:53:51.945: INFO: namespace: e2e-tests-emptydir-wj4zm, resource: bindings, ignored listing per whitelist Mar 17 11:53:51.956: INFO: namespace e2e-tests-emptydir-wj4zm deletion completed in 6.100625119s • [SLOW TEST:10.290 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:53:51.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Mar 17 11:53:56.198: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:54:20.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-97rvc" for this suite. Mar 17 11:54:26.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:54:26.391: INFO: namespace: e2e-tests-namespaces-97rvc, resource: bindings, ignored listing per whitelist Mar 17 11:54:26.407: INFO: namespace e2e-tests-namespaces-97rvc deletion completed in 6.096327679s STEP: Destroying namespace "e2e-tests-nsdeletetest-9hkc4" for this suite. Mar 17 11:54:26.410: INFO: Namespace e2e-tests-nsdeletetest-9hkc4 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-ftkqx" for this suite. Mar 17 11:54:32.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:54:32.437: INFO: namespace: e2e-tests-nsdeletetest-ftkqx, resource: bindings, ignored listing per whitelist Mar 17 11:54:32.517: INFO: namespace e2e-tests-nsdeletetest-ftkqx deletion completed in 6.107199924s • [SLOW TEST:40.561 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:54:32.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 17 11:54:32.636: INFO: Waiting up to 5m0s for pod "pod-10be6f13-6846-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-sjhcc" to be "success or failure" Mar 17 11:54:32.649: INFO: Pod "pod-10be6f13-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.862027ms Mar 17 11:54:34.652: INFO: Pod "pod-10be6f13-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016450481s Mar 17 11:54:36.656: INFO: Pod "pod-10be6f13-6846-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020734552s STEP: Saw pod success Mar 17 11:54:36.657: INFO: Pod "pod-10be6f13-6846-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:54:36.660: INFO: Trying to get logs from node hunter-worker2 pod pod-10be6f13-6846-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 11:54:36.688: INFO: Waiting for pod pod-10be6f13-6846-11ea-b08f-0242ac11000f to disappear Mar 17 11:54:36.694: INFO: Pod pod-10be6f13-6846-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:54:36.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sjhcc" for this suite. 
Mar 17 11:54:42.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:54:42.799: INFO: namespace: e2e-tests-emptydir-sjhcc, resource: bindings, ignored listing per whitelist
Mar 17 11:54:42.825: INFO: namespace e2e-tests-emptydir-sjhcc deletion completed in 6.128871723s
• [SLOW TEST:10.308 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:54:42.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar 17 11:54:52.987: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:52.988: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.024996 6 log.go:172] (0xc000d6a6e0) (0xc00269d900) Create stream
I0317 11:54:53.025026 6 log.go:172] (0xc000d6a6e0) (0xc00269d900) Stream added, broadcasting: 1
I0317 11:54:53.027069 6 log.go:172] (0xc000d6a6e0) Reply frame received for 1
I0317 11:54:53.027100 6 log.go:172] (0xc000d6a6e0) (0xc00269da40) Create stream
I0317 11:54:53.027111 6 log.go:172] (0xc000d6a6e0) (0xc00269da40) Stream added, broadcasting: 3
I0317 11:54:53.028055 6 log.go:172] (0xc000d6a6e0) Reply frame received for 3
I0317 11:54:53.028089 6 log.go:172] (0xc000d6a6e0) (0xc00269db80) Create stream
I0317 11:54:53.028102 6 log.go:172] (0xc000d6a6e0) (0xc00269db80) Stream added, broadcasting: 5
I0317 11:54:53.029040 6 log.go:172] (0xc000d6a6e0) Reply frame received for 5
I0317 11:54:53.104577 6 log.go:172] (0xc000d6a6e0) Data frame received for 3
I0317 11:54:53.104636 6 log.go:172] (0xc00269da40) (3) Data frame handling
I0317 11:54:53.104664 6 log.go:172] (0xc00269da40) (3) Data frame sent
I0317 11:54:53.104679 6 log.go:172] (0xc000d6a6e0) Data frame received for 3
I0317 11:54:53.104692 6 log.go:172] (0xc00269da40) (3) Data frame handling
I0317 11:54:53.104714 6 log.go:172] (0xc000d6a6e0) Data frame received for 5
I0317 11:54:53.104737 6 log.go:172] (0xc00269db80) (5) Data frame handling
I0317 11:54:53.106418 6 log.go:172] (0xc000d6a6e0) Data frame received for 1
I0317 11:54:53.106442 6 log.go:172] (0xc00269d900) (1) Data frame handling
I0317 11:54:53.106455 6 log.go:172] (0xc00269d900) (1) Data frame sent
I0317 11:54:53.106472 6 log.go:172] (0xc000d6a6e0) (0xc00269d900) Stream removed, broadcasting: 1
I0317 11:54:53.106489 6 log.go:172] (0xc000d6a6e0) Go away received
I0317 11:54:53.106639 6 log.go:172] (0xc000d6a6e0) (0xc00269d900) Stream removed, broadcasting: 1
I0317 11:54:53.106657 6 log.go:172] (0xc000d6a6e0) (0xc00269da40) Stream removed, broadcasting: 3
I0317 11:54:53.106665 6 log.go:172] (0xc000d6a6e0) (0xc00269db80) Stream removed, broadcasting: 5
Mar 17 11:54:53.106: INFO: Exec stderr: ""
Mar 17 11:54:53.106: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.106: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.147337 6 log.go:172] (0xc000302840) (0xc00270b220) Create stream
I0317 11:54:53.147375 6 log.go:172] (0xc000302840) (0xc00270b220) Stream added, broadcasting: 1
I0317 11:54:53.150263 6 log.go:172] (0xc000302840) Reply frame received for 1
I0317 11:54:53.150314 6 log.go:172] (0xc000302840) (0xc002694780) Create stream
I0317 11:54:53.150331 6 log.go:172] (0xc000302840) (0xc002694780) Stream added, broadcasting: 3
I0317 11:54:53.151204 6 log.go:172] (0xc000302840) Reply frame received for 3
I0317 11:54:53.151233 6 log.go:172] (0xc000302840) (0xc00269dcc0) Create stream
I0317 11:54:53.151244 6 log.go:172] (0xc000302840) (0xc00269dcc0) Stream added, broadcasting: 5
I0317 11:54:53.152116 6 log.go:172] (0xc000302840) Reply frame received for 5
I0317 11:54:53.213426 6 log.go:172] (0xc000302840) Data frame received for 3
I0317 11:54:53.213475 6 log.go:172] (0xc002694780) (3) Data frame handling
I0317 11:54:53.213570 6 log.go:172] (0xc002694780) (3) Data frame sent
I0317 11:54:53.213608 6 log.go:172] (0xc000302840) Data frame received for 3
I0317 11:54:53.213631 6 log.go:172] (0xc002694780) (3) Data frame handling
I0317 11:54:53.213669 6 log.go:172] (0xc000302840) Data frame received for 5
I0317 11:54:53.213693 6 log.go:172] (0xc00269dcc0) (5) Data frame handling
I0317 11:54:53.215035 6 log.go:172] (0xc000302840) Data frame received for 1
I0317 11:54:53.215069 6 log.go:172] (0xc00270b220) (1) Data frame handling
I0317 11:54:53.215096 6 log.go:172] (0xc00270b220) (1) Data frame sent
I0317 11:54:53.215126 6 log.go:172] (0xc000302840) (0xc00270b220) Stream removed, broadcasting: 1
I0317 11:54:53.215158 6 log.go:172] (0xc000302840) Go away received
I0317 11:54:53.215330 6 log.go:172] (0xc000302840) (0xc00270b220) Stream removed, broadcasting: 1
I0317 11:54:53.215393 6 log.go:172] (0xc000302840) (0xc002694780) Stream removed, broadcasting: 3
I0317 11:54:53.215433 6 log.go:172] (0xc000302840) (0xc00269dcc0) Stream removed, broadcasting: 5
Mar 17 11:54:53.215: INFO: Exec stderr: ""
Mar 17 11:54:53.215: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.215: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.247810 6 log.go:172] (0xc000302d10) (0xc00270b4a0) Create stream
I0317 11:54:53.247831 6 log.go:172] (0xc000302d10) (0xc00270b4a0) Stream added, broadcasting: 1
I0317 11:54:53.250406 6 log.go:172] (0xc000302d10) Reply frame received for 1
I0317 11:54:53.250438 6 log.go:172] (0xc000302d10) (0xc00269dea0) Create stream
I0317 11:54:53.250450 6 log.go:172] (0xc000302d10) (0xc00269dea0) Stream added, broadcasting: 3
I0317 11:54:53.251561 6 log.go:172] (0xc000302d10) Reply frame received for 3
I0317 11:54:53.251607 6 log.go:172] (0xc000302d10) (0xc0022f3680) Create stream
I0317 11:54:53.251626 6 log.go:172] (0xc000302d10) (0xc0022f3680) Stream added, broadcasting: 5
I0317 11:54:53.252908 6 log.go:172] (0xc000302d10) Reply frame received for 5
I0317 11:54:53.315502 6 log.go:172] (0xc000302d10) Data frame received for 3
I0317 11:54:53.315532 6 log.go:172] (0xc00269dea0) (3) Data frame handling
I0317 11:54:53.315552 6 log.go:172] (0xc00269dea0) (3) Data frame sent
I0317 11:54:53.315683 6 log.go:172] (0xc000302d10) Data frame received for 5
I0317 11:54:53.315704 6 log.go:172] (0xc0022f3680) (5) Data frame handling
I0317 11:54:53.315718 6 log.go:172] (0xc000302d10) Data frame received for 3
I0317 11:54:53.315733 6 log.go:172] (0xc00269dea0) (3) Data frame handling
I0317 11:54:53.317367 6 log.go:172] (0xc000302d10) Data frame received for 1
I0317 11:54:53.317386 6 log.go:172] (0xc00270b4a0) (1) Data frame handling
I0317 11:54:53.317398 6 log.go:172] (0xc00270b4a0) (1) Data frame sent
I0317 11:54:53.317415 6 log.go:172] (0xc000302d10) (0xc00270b4a0) Stream removed, broadcasting: 1
I0317 11:54:53.317447 6 log.go:172] (0xc000302d10) Go away received
I0317 11:54:53.317504 6 log.go:172] (0xc000302d10) (0xc00270b4a0) Stream removed, broadcasting: 1
I0317 11:54:53.317520 6 log.go:172] (0xc000302d10) (0xc00269dea0) Stream removed, broadcasting: 3
I0317 11:54:53.317531 6 log.go:172] (0xc000302d10) (0xc0022f3680) Stream removed, broadcasting: 5
Mar 17 11:54:53.317: INFO: Exec stderr: ""
Mar 17 11:54:53.317: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.317: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.353405 6 log.go:172] (0xc000d6abb0) (0xc0026a61e0) Create stream
I0317 11:54:53.353441 6 log.go:172] (0xc000d6abb0) (0xc0026a61e0) Stream added, broadcasting: 1
I0317 11:54:53.355825 6 log.go:172] (0xc000d6abb0) Reply frame received for 1
I0317 11:54:53.355880 6 log.go:172] (0xc000d6abb0) (0xc001859220) Create stream
I0317 11:54:53.355897 6 log.go:172] (0xc000d6abb0) (0xc001859220) Stream added, broadcasting: 3
I0317 11:54:53.357006 6 log.go:172] (0xc000d6abb0) Reply frame received for 3
I0317 11:54:53.357038 6 log.go:172] (0xc000d6abb0) (0xc0022f3720) Create stream
I0317 11:54:53.357044 6 log.go:172] (0xc000d6abb0) (0xc0022f3720) Stream added, broadcasting: 5
I0317 11:54:53.358288 6 log.go:172] (0xc000d6abb0) Reply frame received for 5
I0317 11:54:53.428655 6 log.go:172] (0xc000d6abb0) Data frame received for 5
I0317 11:54:53.428737 6 log.go:172] (0xc0022f3720) (5) Data frame handling
I0317 11:54:53.428787 6 log.go:172] (0xc000d6abb0) Data frame received for 3
I0317 11:54:53.428816 6 log.go:172] (0xc001859220) (3) Data frame handling
I0317 11:54:53.428855 6 log.go:172] (0xc001859220) (3) Data frame sent
I0317 11:54:53.428879 6 log.go:172] (0xc000d6abb0) Data frame received for 3
I0317 11:54:53.428898 6 log.go:172] (0xc001859220) (3) Data frame handling
I0317 11:54:53.430458 6 log.go:172] (0xc000d6abb0) Data frame received for 1
I0317 11:54:53.430503 6 log.go:172] (0xc0026a61e0) (1) Data frame handling
I0317 11:54:53.430531 6 log.go:172] (0xc0026a61e0) (1) Data frame sent
I0317 11:54:53.430553 6 log.go:172] (0xc000d6abb0) (0xc0026a61e0) Stream removed, broadcasting: 1
I0317 11:54:53.430578 6 log.go:172] (0xc000d6abb0) Go away received
I0317 11:54:53.430689 6 log.go:172] (0xc000d6abb0) (0xc0026a61e0) Stream removed, broadcasting: 1
I0317 11:54:53.430716 6 log.go:172] (0xc000d6abb0) (0xc001859220) Stream removed, broadcasting: 3
I0317 11:54:53.430725 6 log.go:172] (0xc000d6abb0) (0xc0022f3720) Stream removed, broadcasting: 5
Mar 17 11:54:53.430: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 17 11:54:53.430: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.430: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.464678 6 log.go:172] (0xc000ea6b00) (0xc0018595e0) Create stream
I0317 11:54:53.464714 6 log.go:172] (0xc000ea6b00) (0xc0018595e0) Stream added, broadcasting: 1
I0317 11:54:53.467502 6 log.go:172] (0xc000ea6b00) Reply frame received for 1
I0317 11:54:53.467549 6 log.go:172] (0xc000ea6b00) (0xc0026a6280) Create stream
I0317 11:54:53.467569 6 log.go:172] (0xc000ea6b00) (0xc0026a6280) Stream added, broadcasting: 3
I0317 11:54:53.468710 6 log.go:172] (0xc000ea6b00) Reply frame received for 3
I0317 11:54:53.468766 6 log.go:172] (0xc000ea6b00) (0xc00270b5e0) Create stream
I0317 11:54:53.468786 6 log.go:172] (0xc000ea6b00) (0xc00270b5e0) Stream added, broadcasting: 5
I0317 11:54:53.470135 6 log.go:172] (0xc000ea6b00) Reply frame received for 5
I0317 11:54:53.520432 6 log.go:172] (0xc000ea6b00) Data frame received for 3
I0317 11:54:53.520525 6 log.go:172] (0xc0026a6280) (3) Data frame handling
I0317 11:54:53.520554 6 log.go:172] (0xc0026a6280) (3) Data frame sent
I0317 11:54:53.520603 6 log.go:172] (0xc000ea6b00) Data frame received for 3
I0317 11:54:53.520649 6 log.go:172] (0xc000ea6b00) Data frame received for 5
I0317 11:54:53.520708 6 log.go:172] (0xc00270b5e0) (5) Data frame handling
I0317 11:54:53.520776 6 log.go:172] (0xc0026a6280) (3) Data frame handling
I0317 11:54:53.522153 6 log.go:172] (0xc000ea6b00) Data frame received for 1
I0317 11:54:53.522264 6 log.go:172] (0xc0018595e0) (1) Data frame handling
I0317 11:54:53.522303 6 log.go:172] (0xc0018595e0) (1) Data frame sent
I0317 11:54:53.522327 6 log.go:172] (0xc000ea6b00) (0xc0018595e0) Stream removed, broadcasting: 1
I0317 11:54:53.522345 6 log.go:172] (0xc000ea6b00) Go away received
I0317 11:54:53.522507 6 log.go:172] (0xc000ea6b00) (0xc0018595e0) Stream removed, broadcasting: 1
I0317 11:54:53.522538 6 log.go:172] (0xc000ea6b00) (0xc0026a6280) Stream removed, broadcasting: 3
I0317 11:54:53.522560 6 log.go:172] (0xc000ea6b00) (0xc00270b5e0) Stream removed, broadcasting: 5
Mar 17 11:54:53.522: INFO: Exec stderr: ""
Mar 17 11:54:53.522: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.522: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.552888 6 log.go:172] (0xc0010822c0) (0xc002694a00) Create stream
I0317 11:54:53.552919 6 log.go:172] (0xc0010822c0) (0xc002694a00) Stream added, broadcasting: 1
I0317 11:54:53.554647 6 log.go:172] (0xc0010822c0) Reply frame received for 1
I0317 11:54:53.554687 6 log.go:172] (0xc0010822c0) (0xc002694aa0) Create stream
I0317 11:54:53.554702 6 log.go:172] (0xc0010822c0) (0xc002694aa0) Stream added, broadcasting: 3
I0317 11:54:53.555432 6 log.go:172] (0xc0010822c0) Reply frame received for 3
I0317 11:54:53.555460 6 log.go:172] (0xc0010822c0) (0xc002694b40) Create stream
I0317 11:54:53.555469 6 log.go:172] (0xc0010822c0) (0xc002694b40) Stream added, broadcasting: 5
I0317 11:54:53.556058 6 log.go:172] (0xc0010822c0) Reply frame received for 5
I0317 11:54:53.604332 6 log.go:172] (0xc0010822c0) Data frame received for 5
I0317 11:54:53.604357 6 log.go:172] (0xc002694b40) (5) Data frame handling
I0317 11:54:53.604388 6 log.go:172] (0xc0010822c0) Data frame received for 3
I0317 11:54:53.604420 6 log.go:172] (0xc002694aa0) (3) Data frame handling
I0317 11:54:53.604441 6 log.go:172] (0xc002694aa0) (3) Data frame sent
I0317 11:54:53.604454 6 log.go:172] (0xc0010822c0) Data frame received for 3
I0317 11:54:53.604464 6 log.go:172] (0xc002694aa0) (3) Data frame handling
I0317 11:54:53.605907 6 log.go:172] (0xc0010822c0) Data frame received for 1
I0317 11:54:53.605921 6 log.go:172] (0xc002694a00) (1) Data frame handling
I0317 11:54:53.605927 6 log.go:172] (0xc002694a00) (1) Data frame sent
I0317 11:54:53.605937 6 log.go:172] (0xc0010822c0) (0xc002694a00) Stream removed, broadcasting: 1
I0317 11:54:53.605945 6 log.go:172] (0xc0010822c0) Go away received
I0317 11:54:53.606044 6 log.go:172] (0xc0010822c0) (0xc002694a00) Stream removed, broadcasting: 1
I0317 11:54:53.606068 6 log.go:172] (0xc0010822c0) (0xc002694aa0) Stream removed, broadcasting: 3
I0317 11:54:53.606085 6 log.go:172] (0xc0010822c0) (0xc002694b40) Stream removed, broadcasting: 5
Mar 17 11:54:53.606: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 17 11:54:53.606: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.606: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.643775 6 log.go:172] (0xc000ea7080) (0xc001859860) Create stream
I0317 11:54:53.643800 6 log.go:172] (0xc000ea7080) (0xc001859860) Stream added, broadcasting: 1
I0317 11:54:53.645588 6 log.go:172] (0xc000ea7080) Reply frame received for 1
I0317 11:54:53.645636 6 log.go:172] (0xc000ea7080) (0xc00270b680) Create stream
I0317 11:54:53.645648 6 log.go:172] (0xc000ea7080) (0xc00270b680) Stream added, broadcasting: 3
I0317 11:54:53.646537 6 log.go:172] (0xc000ea7080) Reply frame received for 3
I0317 11:54:53.646586 6 log.go:172] (0xc000ea7080) (0xc00270b720) Create stream
I0317 11:54:53.646602 6 log.go:172] (0xc000ea7080) (0xc00270b720) Stream added, broadcasting: 5
I0317 11:54:53.647573 6 log.go:172] (0xc000ea7080) Reply frame received for 5
I0317 11:54:53.700266 6 log.go:172] (0xc000ea7080) Data frame received for 3
I0317 11:54:53.700361 6 log.go:172] (0xc00270b680) (3) Data frame handling
I0317 11:54:53.700400 6 log.go:172] (0xc00270b680) (3) Data frame sent
I0317 11:54:53.700512 6 log.go:172] (0xc000ea7080) Data frame received for 3
I0317 11:54:53.700543 6 log.go:172] (0xc00270b680) (3) Data frame handling
I0317 11:54:53.700561 6 log.go:172] (0xc000ea7080) Data frame received for 5
I0317 11:54:53.700572 6 log.go:172] (0xc00270b720) (5) Data frame handling
I0317 11:54:53.702275 6 log.go:172] (0xc000ea7080) Data frame received for 1
I0317 11:54:53.702304 6 log.go:172] (0xc001859860) (1) Data frame handling
I0317 11:54:53.702317 6 log.go:172] (0xc001859860) (1) Data frame sent
I0317 11:54:53.702346 6 log.go:172] (0xc000ea7080) (0xc001859860) Stream removed, broadcasting: 1
I0317 11:54:53.702387 6 log.go:172] (0xc000ea7080) Go away received
I0317 11:54:53.702446 6 log.go:172] (0xc000ea7080) (0xc001859860) Stream removed, broadcasting: 1
I0317 11:54:53.702465 6 log.go:172] (0xc000ea7080) (0xc00270b680) Stream removed, broadcasting: 3
I0317 11:54:53.702481 6 log.go:172] (0xc000ea7080) (0xc00270b720) Stream removed, broadcasting: 5
Mar 17 11:54:53.702: INFO: Exec stderr: ""
Mar 17 11:54:53.702: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.702: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.737710 6 log.go:172] (0xc000d6b080) (0xc0026a6500) Create stream
I0317 11:54:53.737735 6 log.go:172] (0xc000d6b080) (0xc0026a6500) Stream added, broadcasting: 1
I0317 11:54:53.740380 6 log.go:172] (0xc000d6b080) Reply frame received for 1
I0317 11:54:53.740431 6 log.go:172] (0xc000d6b080) (0xc0022f37c0) Create stream
I0317 11:54:53.740445 6 log.go:172] (0xc000d6b080) (0xc0022f37c0) Stream added, broadcasting: 3
I0317 11:54:53.741404 6 log.go:172] (0xc000d6b080) Reply frame received for 3
I0317 11:54:53.741453 6 log.go:172] (0xc000d6b080) (0xc00270b7c0) Create stream
I0317 11:54:53.741470 6 log.go:172] (0xc000d6b080) (0xc00270b7c0) Stream added, broadcasting: 5
I0317 11:54:53.742289 6 log.go:172] (0xc000d6b080) Reply frame received for 5
I0317 11:54:53.800176 6 log.go:172] (0xc000d6b080) Data frame received for 5
I0317 11:54:53.800218 6 log.go:172] (0xc00270b7c0) (5) Data frame handling
I0317 11:54:53.800244 6 log.go:172] (0xc000d6b080) Data frame received for 3
I0317 11:54:53.800258 6 log.go:172] (0xc0022f37c0) (3) Data frame handling
I0317 11:54:53.800274 6 log.go:172] (0xc0022f37c0) (3) Data frame sent
I0317 11:54:53.800308 6 log.go:172] (0xc000d6b080) Data frame received for 3
I0317 11:54:53.800338 6 log.go:172] (0xc0022f37c0) (3) Data frame handling
I0317 11:54:53.802036 6 log.go:172] (0xc000d6b080) Data frame received for 1
I0317 11:54:53.802082 6 log.go:172] (0xc0026a6500) (1) Data frame handling
I0317 11:54:53.802118 6 log.go:172] (0xc0026a6500) (1) Data frame sent
I0317 11:54:53.802149 6 log.go:172] (0xc000d6b080) (0xc0026a6500) Stream removed, broadcasting: 1
I0317 11:54:53.802191 6 log.go:172] (0xc000d6b080) Go away received
I0317 11:54:53.802278 6 log.go:172] (0xc000d6b080) (0xc0026a6500) Stream removed, broadcasting: 1
I0317 11:54:53.802301 6 log.go:172] (0xc000d6b080) (0xc0022f37c0) Stream removed, broadcasting: 3
I0317 11:54:53.802316 6 log.go:172] (0xc000d6b080) (0xc00270b7c0) Stream removed, broadcasting: 5
Mar 17 11:54:53.802: INFO: Exec stderr: ""
Mar 17 11:54:53.802: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.802: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.839405 6 log.go:172] (0xc000ea7550) (0xc001859b80) Create stream
I0317 11:54:53.839439 6 log.go:172] (0xc000ea7550) (0xc001859b80) Stream added, broadcasting: 1
I0317 11:54:53.842172 6 log.go:172] (0xc000ea7550) Reply frame received for 1
I0317 11:54:53.842216 6 log.go:172] (0xc000ea7550) (0xc0022f3860) Create stream
I0317 11:54:53.842233 6 log.go:172] (0xc000ea7550) (0xc0022f3860) Stream added, broadcasting: 3
I0317 11:54:53.843110 6 log.go:172] (0xc000ea7550) Reply frame received for 3
I0317 11:54:53.843153 6 log.go:172] (0xc000ea7550) (0xc002694c80) Create stream
I0317 11:54:53.843170 6 log.go:172] (0xc000ea7550) (0xc002694c80) Stream added, broadcasting: 5
I0317 11:54:53.844042 6 log.go:172] (0xc000ea7550) Reply frame received for 5
I0317 11:54:53.913016 6 log.go:172] (0xc000ea7550) Data frame received for 5
I0317 11:54:53.913071 6 log.go:172] (0xc002694c80) (5) Data frame handling
I0317 11:54:53.913276 6 log.go:172] (0xc000ea7550) Data frame received for 3
I0317 11:54:53.913328 6 log.go:172] (0xc0022f3860) (3) Data frame handling
I0317 11:54:53.913377 6 log.go:172] (0xc0022f3860) (3) Data frame sent
I0317 11:54:53.913424 6 log.go:172] (0xc000ea7550) Data frame received for 3
I0317 11:54:53.913468 6 log.go:172] (0xc0022f3860) (3) Data frame handling
I0317 11:54:53.917794 6 log.go:172] (0xc000ea7550) Data frame received for 1
I0317 11:54:53.917812 6 log.go:172] (0xc001859b80) (1) Data frame handling
I0317 11:54:53.917821 6 log.go:172] (0xc001859b80) (1) Data frame sent
I0317 11:54:53.917837 6 log.go:172] (0xc000ea7550) (0xc001859b80) Stream removed, broadcasting: 1
I0317 11:54:53.917852 6 log.go:172] (0xc000ea7550) Go away received
I0317 11:54:53.917955 6 log.go:172] (0xc000ea7550) (0xc001859b80) Stream removed, broadcasting: 1
I0317 11:54:53.917985 6 log.go:172] (0xc000ea7550) (0xc0022f3860) Stream removed, broadcasting: 3
I0317 11:54:53.918006 6 log.go:172] (0xc000ea7550) (0xc002694c80) Stream removed, broadcasting: 5
Mar 17 11:54:53.918: INFO: Exec stderr: ""
Mar 17 11:54:53.918: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rdmc6 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:54:53.918: INFO: >>> kubeConfig: /root/.kube/config
I0317 11:54:53.949830 6 log.go:172] (0xc0003031e0) (0xc00270ba40) Create stream
I0317 11:54:53.949875 6 log.go:172] (0xc0003031e0) (0xc00270ba40) Stream added, broadcasting: 1
I0317 11:54:53.951695 6 log.go:172] (0xc0003031e0) Reply frame received for 1
I0317 11:54:53.951743 6 log.go:172] (0xc0003031e0) (0xc00270bb80) Create stream
I0317 11:54:53.951766 6 log.go:172] (0xc0003031e0) (0xc00270bb80) Stream added, broadcasting: 3
I0317 11:54:53.952749 6 log.go:172] (0xc0003031e0) Reply frame received for 3
I0317 11:54:53.952870 6 log.go:172] (0xc0003031e0) (0xc00270bc20) Create stream
I0317 11:54:53.952883 6 log.go:172] (0xc0003031e0) (0xc00270bc20) Stream added, broadcasting: 5
I0317 11:54:53.953942 6 log.go:172] (0xc0003031e0) Reply frame received for 5
I0317 11:54:54.008458 6 log.go:172] (0xc0003031e0) Data frame received for 3
I0317 11:54:54.008482 6 log.go:172] (0xc00270bb80) (3) Data frame handling
I0317 11:54:54.008492 6 log.go:172] (0xc00270bb80) (3) Data frame sent
I0317 11:54:54.008506 6 log.go:172] (0xc0003031e0) Data frame received for 3
I0317 11:54:54.008512 6 log.go:172] (0xc00270bb80) (3) Data frame handling
I0317 11:54:54.008635 6 log.go:172] (0xc0003031e0) Data frame received for 5
I0317 11:54:54.008647 6 log.go:172] (0xc00270bc20) (5) Data frame handling
I0317 11:54:54.010476 6 log.go:172] (0xc0003031e0) Data frame received for 1
I0317 11:54:54.010495 6 log.go:172] (0xc00270ba40) (1) Data frame handling
I0317 11:54:54.010505 6 log.go:172] (0xc00270ba40) (1) Data frame sent
I0317 11:54:54.010516 6 log.go:172] (0xc0003031e0) (0xc00270ba40) Stream removed, broadcasting: 1
I0317 11:54:54.010542 6 log.go:172] (0xc0003031e0) Go away received
I0317 11:54:54.010612 6 log.go:172] (0xc0003031e0) (0xc00270ba40) Stream removed, broadcasting: 1
I0317 11:54:54.010652 6 log.go:172] (0xc0003031e0) (0xc00270bb80) Stream removed, broadcasting: 3
I0317 11:54:54.010683 6 log.go:172] (0xc0003031e0) (0xc00270bc20) Stream removed, broadcasting: 5
Mar 17 11:54:54.010: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:54:54.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-rdmc6" for this suite.
Mar 17 11:55:34.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:55:34.039: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-rdmc6, resource: bindings, ignored listing per whitelist
Mar 17 11:55:34.100: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-rdmc6 deletion completed in 40.085146073s
• [SLOW TEST:51.274 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:55:34.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 17 11:55:34.199: INFO: Waiting up to 5m0s for pod "downward-api-3572a65d-6846-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-prn67" to be "success or failure"
Mar 17 11:55:34.213: INFO: Pod "downward-api-3572a65d-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.643718ms
Mar 17 11:55:36.217: INFO: Pod "downward-api-3572a65d-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017367824s
Mar 17 11:55:38.221: INFO: Pod "downward-api-3572a65d-6846-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02162642s
STEP: Saw pod success
Mar 17 11:55:38.221: INFO: Pod "downward-api-3572a65d-6846-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:55:38.224: INFO: Trying to get logs from node hunter-worker pod downward-api-3572a65d-6846-11ea-b08f-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 17 11:55:38.251: INFO: Waiting for pod downward-api-3572a65d-6846-11ea-b08f-0242ac11000f to disappear
Mar 17 11:55:38.285: INFO: Pod downward-api-3572a65d-6846-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:55:38.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-prn67" for this suite.
Mar 17 11:55:44.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:55:44.355: INFO: namespace: e2e-tests-downward-api-prn67, resource: bindings, ignored listing per whitelist
Mar 17 11:55:44.400: INFO: namespace e2e-tests-downward-api-prn67 deletion completed in 6.111657093s
• [SLOW TEST:10.300 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:55:44.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 17 11:55:44.493: INFO: Waiting up to 5m0s for pod "pod-3b957df6-6846-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-pjm56" to be "success or failure"
Mar 17 11:55:44.549: INFO: Pod "pod-3b957df6-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.369388ms
Mar 17 11:55:46.553: INFO: Pod "pod-3b957df6-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05964153s
Mar 17 11:55:48.557: INFO: Pod "pod-3b957df6-6846-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064026705s
STEP: Saw pod success
Mar 17 11:55:48.557: INFO: Pod "pod-3b957df6-6846-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 11:55:48.560: INFO: Trying to get logs from node hunter-worker2 pod pod-3b957df6-6846-11ea-b08f-0242ac11000f container test-container:
STEP: delete the pod
Mar 17 11:55:48.579: INFO: Waiting for pod pod-3b957df6-6846-11ea-b08f-0242ac11000f to disappear
Mar 17 11:55:48.581: INFO: Pod pod-3b957df6-6846-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:55:48.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pjm56" for this suite.
Mar 17 11:55:54.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:55:54.677: INFO: namespace: e2e-tests-emptydir-pjm56, resource: bindings, ignored listing per whitelist
Mar 17 11:55:54.690: INFO: namespace e2e-tests-emptydir-pjm56 deletion completed in 6.1059997s
• [SLOW TEST:10.290 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:55:54.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-41b8e275-6846-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:55:54.820: INFO: Waiting up to 5m0s for pod "pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-xjjhz" to be "success or failure" Mar 17 11:55:54.826: INFO: Pod "pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184656ms Mar 17 11:55:56.866: INFO: Pod "pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045876896s Mar 17 11:55:58.870: INFO: Pod "pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049939899s STEP: Saw pod success Mar 17 11:55:58.870: INFO: Pod "pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:55:58.873: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 17 11:55:58.894: INFO: Waiting for pod pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f to disappear Mar 17 11:55:58.916: INFO: Pod pod-configmaps-41ba9e41-6846-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:55:58.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xjjhz" for this suite. Mar 17 11:56:04.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:56:04.987: INFO: namespace: e2e-tests-configmap-xjjhz, resource: bindings, ignored listing per whitelist Mar 17 11:56:05.038: INFO: namespace e2e-tests-configmap-xjjhz deletion completed in 6.109196664s • [SLOW TEST:10.347 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:56:05.038: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Mar 17 11:56:05.156: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:56:05.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6g9tl" for this suite. Mar 17 11:56:11.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:56:11.262: INFO: namespace: e2e-tests-kubectl-6g9tl, resource: bindings, ignored listing per whitelist Mar 17 11:56:11.326: INFO: namespace e2e-tests-kubectl-6g9tl deletion completed in 6.088179003s • [SLOW TEST:6.288 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:56:11.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:56:11.424: INFO: Creating deployment "test-recreate-deployment" Mar 17 11:56:11.439: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 17 11:56:11.447: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 17 11:56:13.456: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 17 11:56:13.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042971, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042971, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042971, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720042971, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:56:15.463: INFO: 
Triggering a new rollout for deployment "test-recreate-deployment" Mar 17 11:56:15.469: INFO: Updating deployment test-recreate-deployment Mar 17 11:56:15.469: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 17 11:56:15.685: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-rs24f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rs24f/deployments/test-recreate-deployment,UID:4ba3f8d3-6846-11ea-99e8-0242ac110002,ResourceVersion:323549,Generation:2,CreationTimestamp:2020-03-17 11:56:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} 
false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-17 11:56:15 +0000 UTC 2020-03-17 11:56:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-17 11:56:15 +0000 UTC 2020-03-17 11:56:11 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 17 11:56:15.745: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-rs24f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rs24f/replicasets/test-recreate-deployment-589c4bfd,UID:4e1d9a51-6846-11ea-99e8-0242ac110002,ResourceVersion:323548,Generation:1,CreationTimestamp:2020-03-17 11:56:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4ba3f8d3-6846-11ea-99e8-0242ac110002 0xc001f3ad2f 0xc001f3ad40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:56:15.745: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 17 11:56:15.745: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-rs24f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rs24f/replicasets/test-recreate-deployment-5bf7f65dc,UID:4ba72b6b-6846-11ea-99e8-0242ac110002,ResourceVersion:323537,Generation:2,CreationTimestamp:2020-03-17 11:56:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4ba3f8d3-6846-11ea-99e8-0242ac110002 0xc001f3ae00 0xc001f3ae01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:56:15.749: INFO: Pod "test-recreate-deployment-589c4bfd-l5hzt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-l5hzt,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-rs24f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rs24f/pods/test-recreate-deployment-589c4bfd-l5hzt,UID:4e1f60c5-6846-11ea-99e8-0242ac110002,ResourceVersion:323550,Generation:0,CreationTimestamp:2020-03-17 11:56:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 4e1d9a51-6846-11ea-99e8-0242ac110002 0xc001f3b6bf 0xc001f3b6d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-88md8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-88md8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-88md8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f3b740} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f3b760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:56:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:56:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:56:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 11:56:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-17 11:56:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:56:15.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rs24f" for this suite. 
Mar 17 11:56:22.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:56:22.077: INFO: namespace: e2e-tests-deployment-rs24f, resource: bindings, ignored listing per whitelist
Mar 17 11:56:22.153: INFO: namespace e2e-tests-deployment-rs24f deletion completed in 6.400045983s
• [SLOW TEST:10.827 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:56:22.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Mar 17 11:56:22.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:22.522: INFO: stderr: ""
Mar 17 11:56:22.522: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 17 11:56:22.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:22.661: INFO: stderr: ""
Mar 17 11:56:22.661: INFO: stdout: "update-demo-nautilus-m422v update-demo-nautilus-x62fq "
Mar 17 11:56:22.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:22.749: INFO: stderr: ""
Mar 17 11:56:22.749: INFO: stdout: ""
Mar 17 11:56:22.749: INFO: update-demo-nautilus-m422v is created but not running
Mar 17 11:56:27.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:27.853: INFO: stderr: ""
Mar 17 11:56:27.853: INFO: stdout: "update-demo-nautilus-m422v update-demo-nautilus-x62fq "
Mar 17 11:56:27.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:27.951: INFO: stderr: ""
Mar 17 11:56:27.951: INFO: stdout: "true"
Mar 17 11:56:27.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:28.051: INFO: stderr: ""
Mar 17 11:56:28.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 17 11:56:28.051: INFO: validating pod update-demo-nautilus-m422v
Mar 17 11:56:28.055: INFO: got data: { "image": "nautilus.jpg" }
Mar 17 11:56:28.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 17 11:56:28.055: INFO: update-demo-nautilus-m422v is verified up and running
Mar 17 11:56:28.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x62fq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:28.157: INFO: stderr: ""
Mar 17 11:56:28.157: INFO: stdout: "true"
Mar 17 11:56:28.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x62fq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:28.259: INFO: stderr: ""
Mar 17 11:56:28.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 17 11:56:28.259: INFO: validating pod update-demo-nautilus-x62fq
Mar 17 11:56:28.263: INFO: got data: { "image": "nautilus.jpg" }
Mar 17 11:56:28.263: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 17 11:56:28.263: INFO: update-demo-nautilus-x62fq is verified up and running
STEP: scaling down the replication controller
Mar 17 11:56:28.265: INFO: scanned /root for discovery docs:
Mar 17 11:56:28.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:29.414: INFO: stderr: ""
Mar 17 11:56:29.414: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 17 11:56:29.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:29.515: INFO: stderr: ""
Mar 17 11:56:29.515: INFO: stdout: "update-demo-nautilus-m422v update-demo-nautilus-x62fq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 17 11:56:34.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:34.621: INFO: stderr: ""
Mar 17 11:56:34.621: INFO: stdout: "update-demo-nautilus-m422v update-demo-nautilus-x62fq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 17 11:56:39.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:39.736: INFO: stderr: ""
Mar 17 11:56:39.736: INFO: stdout: "update-demo-nautilus-m422v update-demo-nautilus-x62fq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 17 11:56:44.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:44.834: INFO: stderr: ""
Mar 17 11:56:44.834: INFO: stdout: "update-demo-nautilus-m422v "
Mar 17 11:56:44.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:44.940: INFO: stderr: ""
Mar 17 11:56:44.940: INFO: stdout: "true"
Mar 17 11:56:44.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:45.048: INFO: stderr: ""
Mar 17 11:56:45.048: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 17 11:56:45.048: INFO: validating pod update-demo-nautilus-m422v
Mar 17 11:56:45.051: INFO: got data: { "image": "nautilus.jpg" }
Mar 17 11:56:45.051: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 17 11:56:45.051: INFO: update-demo-nautilus-m422v is verified up and running
STEP: scaling up the replication controller
Mar 17 11:56:45.054: INFO: scanned /root for discovery docs:
Mar 17 11:56:45.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-hdftg'
Mar 17 11:56:46.232: INFO: stderr: ""
Mar 17 11:56:46.232: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 17 11:56:46.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:46.337: INFO: stderr: "" Mar 17 11:56:46.337: INFO: stdout: "update-demo-nautilus-m422v update-demo-nautilus-t8kg4 " Mar 17 11:56:46.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:46.448: INFO: stderr: "" Mar 17 11:56:46.448: INFO: stdout: "true" Mar 17 11:56:46.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:46.542: INFO: stderr: "" Mar 17 11:56:46.542: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:56:46.542: INFO: validating pod update-demo-nautilus-m422v Mar 17 11:56:46.545: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:56:46.545: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 11:56:46.545: INFO: update-demo-nautilus-m422v is verified up and running Mar 17 11:56:46.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8kg4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:46.642: INFO: stderr: "" Mar 17 11:56:46.642: INFO: stdout: "" Mar 17 11:56:46.642: INFO: update-demo-nautilus-t8kg4 is created but not running Mar 17 11:56:51.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:51.737: INFO: stderr: "" Mar 17 11:56:51.737: INFO: stdout: "update-demo-nautilus-m422v update-demo-nautilus-t8kg4 " Mar 17 11:56:51.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:51.833: INFO: stderr: "" Mar 17 11:56:51.834: INFO: stdout: "true" Mar 17 11:56:51.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m422v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:51.927: INFO: stderr: "" Mar 17 11:56:51.927: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:56:51.927: INFO: validating pod update-demo-nautilus-m422v Mar 17 11:56:51.930: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:56:51.930: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 11:56:51.930: INFO: update-demo-nautilus-m422v is verified up and running Mar 17 11:56:51.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8kg4 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:52.021: INFO: stderr: "" Mar 17 11:56:52.021: INFO: stdout: "true" Mar 17 11:56:52.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8kg4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:52.117: INFO: stderr: "" Mar 17 11:56:52.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:56:52.117: INFO: validating pod update-demo-nautilus-t8kg4 Mar 17 11:56:52.121: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:56:52.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 11:56:52.121: INFO: update-demo-nautilus-t8kg4 is verified up and running STEP: using delete to clean up resources Mar 17 11:56:52.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:52.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 17 11:56:52.220: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 17 11:56:52.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-hdftg' Mar 17 11:56:52.322: INFO: stderr: "No resources found.\n" Mar 17 11:56:52.322: INFO: stdout: "" Mar 17 11:56:52.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-hdftg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 17 11:56:52.424: INFO: stderr: "" Mar 17 11:56:52.424: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:56:52.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hdftg" for this suite. 
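The repeated `kubectl get pods ... -o template` invocations above rely on the old kubectl template engine's custom `exists` helper to probe nested fields. A minimal sketch of how such a template can be evaluated with Go's `text/template`, assuming a hand-rolled `exists` registered via `FuncMap` (kubectl's real template engine differs; the sample pod data is illustrative, not a real API response):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// samplePod mimics a pod object decoded from JSON into generic maps,
// shaped like the update-demo pod validated in the log (illustrative values).
func samplePod() map[string]interface{} {
	return map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
}

// podIsRunning renders the kubectl-style template against the pod maps,
// returning "true" when the update-demo container reports a running state.
func podIsRunning(pod map[string]interface{}) string {
	funcs := template.FuncMap{
		// exists walks the chain of keys, reporting whether each is present.
		"exists": func(v interface{}, keys ...string) bool {
			for _, k := range keys {
				m, ok := v.(map[string]interface{})
				if !ok {
					return false
				}
				if v, ok = m[k]; !ok {
					return false
				}
			}
			return true
		},
	}
	tmpl := template.Must(template.New("running").Funcs(funcs).Parse(
		`{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, pod); err != nil {
		return ""
	}
	return buf.String()
}

func main() {
	fmt.Println(podIsRunning(samplePod())) // true
}
```

This is why the log alternates between `stdout: "true"` (container running) and `stdout: ""` (template emitted nothing because the state check failed, as for update-demo-nautilus-t8kg4 on its first poll).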
Mar 17 11:57:14.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:57:14.513: INFO: namespace: e2e-tests-kubectl-hdftg, resource: bindings, ignored listing per whitelist Mar 17 11:57:14.554: INFO: namespace e2e-tests-kubectl-hdftg deletion completed in 22.125750611s • [SLOW TEST:52.401 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:57:14.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-x2x6b STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-x2x6b STEP: Deleting pre-stop pod Mar 17 11:57:27.701: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:57:27.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-x2x6b" for this suite. Mar 17 11:58:05.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:58:05.773: INFO: namespace: e2e-tests-prestop-x2x6b, resource: bindings, ignored listing per whitelist Mar 17 11:58:05.840: INFO: namespace e2e-tests-prestop-x2x6b deletion completed in 38.11155226s • [SLOW TEST:51.285 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:58:05.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle 
hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 17 11:58:14.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:14.021: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:16.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:16.025: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:18.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:18.026: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:20.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:20.025: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:22.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:22.025: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:24.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:24.026: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:26.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:26.025: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:28.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:28.025: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:30.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:30.025: INFO: Pod pod-with-prestop-exec-hook still exists Mar 17 11:58:32.021: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 17 11:58:32.025: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook 
[AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:58:32.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-b5wxj" for this suite. Mar 17 11:58:54.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:58:54.092: INFO: namespace: e2e-tests-container-lifecycle-hook-b5wxj, resource: bindings, ignored listing per whitelist Mar 17 11:58:54.151: INFO: namespace e2e-tests-container-lifecycle-hook-b5wxj deletion completed in 22.113136132s • [SLOW TEST:48.312 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:58:54.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-acafbaef-6846-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 11:58:54.262: INFO: Waiting up to 5m0s for pod "pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-md42w" to be "success or failure" Mar 17 11:58:54.267: INFO: Pod "pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133409ms Mar 17 11:58:56.270: INFO: Pod "pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007821844s Mar 17 11:58:58.274: INFO: Pod "pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011866221s STEP: Saw pod success Mar 17 11:58:58.274: INFO: Pod "pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 11:58:58.300: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 17 11:58:58.331: INFO: Waiting for pod pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f to disappear Mar 17 11:58:58.359: INFO: Pod pod-configmaps-acb1c1d3-6846-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:58:58.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-md42w" for this suite. 
Mar 17 11:59:04.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:59:04.442: INFO: namespace: e2e-tests-configmap-md42w, resource: bindings, ignored listing per whitelist Mar 17 11:59:04.455: INFO: namespace e2e-tests-configmap-md42w deletion completed in 6.091530859s • [SLOW TEST:10.303 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:59:04.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-b2d5d557-6846-11ea-b08f-0242ac11000f STEP: Creating secret with name s-test-opt-upd-b2d5d5c0-6846-11ea-b08f-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b2d5d557-6846-11ea-b08f-0242ac11000f STEP: Updating secret s-test-opt-upd-b2d5d5c0-6846-11ea-b08f-0242ac11000f STEP: Creating secret with name s-test-opt-create-b2d5d5e9-6846-11ea-b08f-0242ac11000f STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:00:14.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-44lhx" for this suite. Mar 17 12:00:36.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:00:37.060: INFO: namespace: e2e-tests-projected-44lhx, resource: bindings, ignored listing per whitelist Mar 17 12:00:37.079: INFO: namespace e2e-tests-projected-44lhx deletion completed in 22.093757424s • [SLOW TEST:92.624 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:00:37.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 12:00:37.184: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-kt4gc" to be "success or failure" Mar 17 12:00:37.187: INFO: Pod "downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490581ms Mar 17 12:00:39.205: INFO: Pod "downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021688265s Mar 17 12:00:41.209: INFO: Pod "downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025779976s STEP: Saw pod success Mar 17 12:00:41.209: INFO: Pod "downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:00:41.213: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 12:00:41.272: INFO: Waiting for pod downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f to disappear Mar 17 12:00:41.277: INFO: Pod downwardapi-volume-ea0a18e8-6846-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:00:41.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kt4gc" for this suite. 
Mar 17 12:00:47.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:00:47.323: INFO: namespace: e2e-tests-projected-kt4gc, resource: bindings, ignored listing per whitelist Mar 17 12:00:47.366: INFO: namespace e2e-tests-projected-kt4gc deletion completed in 6.086428825s • [SLOW TEST:10.287 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:00:47.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 17 12:00:47.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h64q7' Mar 17 12:00:49.684: INFO: stderr: "" Mar 17 12:00:49.684: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Mar 17 12:00:50.689: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:00:50.689: INFO: Found 0 / 1 Mar 17 12:00:51.688: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:00:51.688: INFO: Found 0 / 1 Mar 17 12:00:52.689: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:00:52.689: INFO: Found 1 / 1 Mar 17 12:00:52.689: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 17 12:00:52.692: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:00:52.692: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 17 12:00:52.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-lbphk --namespace=e2e-tests-kubectl-h64q7 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 17 12:00:52.795: INFO: stderr: "" Mar 17 12:00:52.796: INFO: stdout: "pod/redis-master-lbphk patched\n" STEP: checking annotations Mar 17 12:00:52.858: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:00:52.858: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:00:52.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h64q7" for this suite. 
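The patch applied above, `{"metadata":{"annotations":{"x":"y"}}}`, merges into the pod object rather than replacing it. For pods kubectl defaults to strategic merge patch, but for a simple nested-map patch like this one the result matches plain RFC 7386 JSON Merge Patch; a hand-rolled sketch of those semantics over decoded JSON maps (not kubectl's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergePatch applies RFC 7386 semantics: maps merge recursively,
// an explicit null deletes a key, anything else replaces the value.
func mergePatch(target, patch map[string]interface{}) map[string]interface{} {
	for k, v := range patch {
		switch pv := v.(type) {
		case map[string]interface{}:
			tv, _ := target[k].(map[string]interface{})
			if tv == nil {
				tv = map[string]interface{}{}
			}
			target[k] = mergePatch(tv, pv)
		case nil:
			delete(target, k)
		default:
			target[k] = v
		}
	}
	return target
}

func main() {
	pod := map[string]interface{}{
		"metadata": map[string]interface{}{"name": "redis-master-lbphk"},
	}
	var patch map[string]interface{}
	if err := json.Unmarshal([]byte(`{"metadata":{"annotations":{"x":"y"}}}`), &patch); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(mergePatch(pod, patch))
	fmt.Println(string(out)) // {"metadata":{"annotations":{"x":"y"},"name":"redis-master-lbphk"}}
}
```

Note that the existing `name` survives while the new `annotations` map is grafted in, which is exactly what the test's "checking annotations" step relies on.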
Mar 17 12:01:14.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:01:14.938: INFO: namespace: e2e-tests-kubectl-h64q7, resource: bindings, ignored listing per whitelist Mar 17 12:01:14.992: INFO: namespace e2e-tests-kubectl-h64q7 deletion completed in 22.13004692s • [SLOW TEST:27.625 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:01:14.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:01:37.111: INFO: Container started at 2020-03-17 12:01:17 +0000 UTC, pod became ready at 2020-03-17 12:01:35 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:01:37.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-7fm2c" for this suite. Mar 17 12:01:59.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:01:59.207: INFO: namespace: e2e-tests-container-probe-7fm2c, resource: bindings, ignored listing per whitelist Mar 17 12:01:59.210: INFO: namespace e2e-tests-container-probe-7fm2c deletion completed in 22.09482883s • [SLOW TEST:44.218 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:01:59.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:01:59.353: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition 
resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:02:00.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-jhb9t" for this suite. Mar 17 12:02:06.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:02:06.451: INFO: namespace: e2e-tests-custom-resource-definition-jhb9t, resource: bindings, ignored listing per whitelist Mar 17 12:02:06.505: INFO: namespace e2e-tests-custom-resource-definition-jhb9t deletion completed in 6.09317802s • [SLOW TEST:7.295 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:02:06.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:02:10.759: INFO: Waiting up to 5m0s for pod "client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f" in namespace "e2e-tests-pods-ln2jn" to be "success or failure" Mar 17 12:02:10.807: INFO: Pod "client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.80747ms Mar 17 12:02:12.811: INFO: Pod "client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051662205s Mar 17 12:02:14.815: INFO: Pod "client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055704063s STEP: Saw pod success Mar 17 12:02:14.815: INFO: Pod "client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:02:14.818: INFO: Trying to get logs from node hunter-worker pod client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f container env3cont: STEP: delete the pod Mar 17 12:02:14.841: INFO: Waiting for pod client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f to disappear Mar 17 12:02:14.851: INFO: Pod client-envvars-21ce119e-6847-11ea-b08f-0242ac11000f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:02:14.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ln2jn" for this suite. 
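The "environment variables for services" test above checks the Docker-links-style variables the kubelet injects for each visible service. A sketch of the naming rule: upper-case the service name, replace dashes with underscores, append the suffix ("redis-master" is illustrative; the log does not show the service name used in this run):

```go
package main

import (
	"fmt"
	"strings"
)

// serviceEnvName builds the env var name Kubernetes derives from a service
// name, e.g. "redis-master" -> REDIS_MASTER_SERVICE_HOST.
func serviceEnvName(service, suffix string) string {
	return strings.ToUpper(strings.ReplaceAll(service, "-", "_")) + "_" + suffix
}

func main() {
	fmt.Println(serviceEnvName("redis-master", "SERVICE_HOST")) // REDIS_MASTER_SERVICE_HOST
	fmt.Println(serviceEnvName("redis-master", "SERVICE_PORT")) // REDIS_MASTER_SERVICE_PORT
}
```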
Mar 17 12:02:52.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:02:52.883: INFO: namespace: e2e-tests-pods-ln2jn, resource: bindings, ignored listing per whitelist Mar 17 12:02:52.939: INFO: namespace e2e-tests-pods-ln2jn deletion completed in 38.084247066s • [SLOW TEST:46.433 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:02:52.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xn87c Mar 17 12:02:57.070: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xn87c STEP: checking the pod's current state and verifying that restartCount is present Mar 17 12:02:57.073: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting 
the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:06:58.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xn87c" for this suite. Mar 17 12:07:04.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:07:04.186: INFO: namespace: e2e-tests-container-probe-xn87c, resource: bindings, ignored listing per whitelist Mar 17 12:07:04.222: INFO: namespace e2e-tests-container-probe-xn87c deletion completed in 6.088022815s • [SLOW TEST:251.283 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:07:04.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-rtxbh in namespace e2e-tests-proxy-56wpl I0317 12:07:04.377373 6 runners.go:184] Created replication controller with name: proxy-service-rtxbh, 
namespace: e2e-tests-proxy-56wpl, replica count: 1 I0317 12:07:05.427823 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 12:07:06.428019 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 12:07:07.428225 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0317 12:07:08.428461 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0317 12:07:09.428730 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0317 12:07:10.428930 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0317 12:07:11.429258 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0317 12:07:12.429548 6 runners.go:184] proxy-service-rtxbh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 17 12:07:12.433: INFO: setup took 8.123576837s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 17 12:07:12.439: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-56wpl/pods/proxy-service-rtxbh-qgdcz:160/proxy/: foo (200; 6.209581ms) Mar 17 12:07:12.442: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-56wpl/pods/proxy-service-rtxbh-qgdcz:162/proxy/: bar (200; 8.811243ms) Mar 17 12:07:12.442: INFO: (0) 
/api/v1/namespaces/e2e-tests-proxy-56wpl/pods/http:proxy-service-rtxbh-qgdcz:160/proxy/: foo (200; 8.835118ms) Mar 17 12:07:12.442: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-56wpl/pods/http:proxy-service-rtxbh-qgdcz:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:07:21.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rbkd6" for this suite. 
Mar 17 12:07:43.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:07:43.653: INFO: namespace: e2e-tests-pods-rbkd6, resource: bindings, ignored listing per whitelist Mar 17 12:07:43.708: INFO: namespace e2e-tests-pods-rbkd6 deletion completed in 22.120192707s • [SLOW TEST:22.246 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:07:43.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 17 12:07:43.783: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 17 12:07:43.801: INFO: Waiting for terminating namespaces to be deleted... 
Mar 17 12:07:43.804: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 17 12:07:43.813: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 17 12:07:43.813: INFO: Container kube-proxy ready: true, restart count 0 Mar 17 12:07:43.813: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 17 12:07:43.813: INFO: Container kindnet-cni ready: true, restart count 0 Mar 17 12:07:43.813: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 17 12:07:43.813: INFO: Container coredns ready: true, restart count 0 Mar 17 12:07:43.813: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 17 12:07:43.818: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 17 12:07:43.818: INFO: Container kindnet-cni ready: true, restart count 0 Mar 17 12:07:43.818: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 17 12:07:43.818: INFO: Container coredns ready: true, restart count 0 Mar 17 12:07:43.818: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 17 12:07:43.818: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-eabd469d-6847-11ea-b08f-0242ac11000f 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-eabd469d-6847-11ea-b08f-0242ac11000f off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-eabd469d-6847-11ea-b08f-0242ac11000f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:07:51.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-d6m2c" for this suite. Mar 17 12:07:59.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:08:00.084: INFO: namespace: e2e-tests-sched-pred-d6m2c, resource: bindings, ignored listing per whitelist Mar 17 12:08:00.087: INFO: namespace e2e-tests-sched-pred-d6m2c deletion completed in 8.123361762s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:16.378 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:08:00.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Mar 17 12:08:00.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 17 12:08:00.375: INFO: stderr: "" Mar 17 12:08:00.375: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:08:00.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8b4mq" for this suite. 
Mar 17 12:08:06.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:08:06.436: INFO: namespace: e2e-tests-kubectl-8b4mq, resource: bindings, ignored listing per whitelist Mar 17 12:08:06.463: INFO: namespace e2e-tests-kubectl-8b4mq deletion completed in 6.084217042s • [SLOW TEST:6.377 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:08:06.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:08:10.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-2g2xt" for this suite. 
Mar 17 12:08:16.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:08:16.730: INFO: namespace: e2e-tests-emptydir-wrapper-2g2xt, resource: bindings, ignored listing per whitelist Mar 17 12:08:16.790: INFO: namespace e2e-tests-emptydir-wrapper-2g2xt deletion completed in 6.117620991s • [SLOW TEST:10.325 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:08:16.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0317 12:08:47.414113 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 17 12:08:47.414: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:08:47.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-pqsv4" for this suite. 
Mar 17 12:08:53.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:08:53.469: INFO: namespace: e2e-tests-gc-pqsv4, resource: bindings, ignored listing per whitelist Mar 17 12:08:53.512: INFO: namespace e2e-tests-gc-pqsv4 deletion completed in 6.095442286s • [SLOW TEST:36.721 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:08:53.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 17 12:08:53.620: INFO: Waiting up to 5m0s for pod "pod-11ef2754-6848-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-lc826" to be "success or failure" Mar 17 12:08:53.624: INFO: Pod "pod-11ef2754-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.634028ms Mar 17 12:08:55.628: INFO: Pod "pod-11ef2754-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007022834s Mar 17 12:08:57.632: INFO: Pod "pod-11ef2754-6848-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011092024s STEP: Saw pod success Mar 17 12:08:57.632: INFO: Pod "pod-11ef2754-6848-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:08:57.635: INFO: Trying to get logs from node hunter-worker2 pod pod-11ef2754-6848-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 12:08:57.669: INFO: Waiting for pod pod-11ef2754-6848-11ea-b08f-0242ac11000f to disappear Mar 17 12:08:57.678: INFO: Pod pod-11ef2754-6848-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:08:57.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lc826" for this suite. Mar 17 12:09:03.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:09:03.733: INFO: namespace: e2e-tests-emptydir-lc826, resource: bindings, ignored listing per whitelist Mar 17 12:09:03.809: INFO: namespace e2e-tests-emptydir-lc826 deletion completed in 6.126817225s • [SLOW TEST:10.298 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:09:03.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-18122b03-6848-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 12:09:03.943: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-nk9xr" to be "success or failure" Mar 17 12:09:03.948: INFO: Pod "pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.203757ms Mar 17 12:09:05.952: INFO: Pod "pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00933415s Mar 17 12:09:07.960: INFO: Pod "pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017021505s STEP: Saw pod success Mar 17 12:09:07.960: INFO: Pod "pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:09:07.964: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 17 12:09:07.995: INFO: Waiting for pod pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f to disappear Mar 17 12:09:08.002: INFO: Pod pod-projected-configmaps-1817df25-6848-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:09:08.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nk9xr" for this suite. Mar 17 12:09:14.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:09:14.047: INFO: namespace: e2e-tests-projected-nk9xr, resource: bindings, ignored listing per whitelist Mar 17 12:09:14.143: INFO: namespace e2e-tests-projected-nk9xr deletion completed in 6.138217951s • [SLOW TEST:10.333 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:09:14.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 17 12:09:22.291: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 17 12:09:22.308: INFO: Pod pod-with-prestop-http-hook still exists Mar 17 12:09:24.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 17 12:09:24.313: INFO: Pod pod-with-prestop-http-hook still exists Mar 17 12:09:26.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 17 12:09:26.313: INFO: Pod pod-with-prestop-http-hook still exists Mar 17 12:09:28.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 17 12:09:28.313: INFO: Pod pod-with-prestop-http-hook still exists Mar 17 12:09:30.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 17 12:09:30.313: INFO: Pod pod-with-prestop-http-hook still exists Mar 17 12:09:32.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 17 12:09:32.313: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:09:32.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-container-lifecycle-hook-6hqwp" for this suite. Mar 17 12:09:54.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:09:54.368: INFO: namespace: e2e-tests-container-lifecycle-hook-6hqwp, resource: bindings, ignored listing per whitelist Mar 17 12:09:54.421: INFO: namespace e2e-tests-container-lifecycle-hook-6hqwp deletion completed in 22.098026745s • [SLOW TEST:40.278 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:09:54.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-363c6b78-6848-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 
17 12:09:54.592: INFO: Waiting up to 5m0s for pod "pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-74qlz" to be "success or failure" Mar 17 12:09:54.608: INFO: Pod "pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.608827ms Mar 17 12:09:56.626: INFO: Pod "pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034464932s Mar 17 12:09:58.629: INFO: Pod "pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037553655s STEP: Saw pod success Mar 17 12:09:58.629: INFO: Pod "pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:09:58.632: INFO: Trying to get logs from node hunter-worker pod pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 17 12:09:58.663: INFO: Waiting for pod pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f to disappear Mar 17 12:09:58.667: INFO: Pod pod-secrets-3647e25d-6848-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:09:58.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-74qlz" for this suite. Mar 17 12:10:04.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:10:04.742: INFO: namespace: e2e-tests-secrets-74qlz, resource: bindings, ignored listing per whitelist Mar 17 12:10:04.785: INFO: namespace e2e-tests-secrets-74qlz deletion completed in 6.114862351s STEP: Destroying namespace "e2e-tests-secret-namespace-9zrpp" for this suite. 
Mar 17 12:10:10.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:10:10.828: INFO: namespace: e2e-tests-secret-namespace-9zrpp, resource: bindings, ignored listing per whitelist Mar 17 12:10:10.876: INFO: namespace e2e-tests-secret-namespace-9zrpp deletion completed in 6.090425332s • [SLOW TEST:16.454 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:10:10.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Mar 17 12:10:11.485: INFO: Waiting up to 5m0s for pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f" in namespace "e2e-tests-svcaccounts-sgvfg" to be "success or failure" Mar 17 12:10:11.488: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.80514ms Mar 17 12:10:13.519: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033524817s Mar 17 12:10:15.523: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038001615s Mar 17 12:10:17.527: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042134472s STEP: Saw pod success Mar 17 12:10:17.528: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f" satisfied condition "success or failure" Mar 17 12:10:17.531: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f container token-test: STEP: delete the pod Mar 17 12:10:17.565: INFO: Waiting for pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f to disappear Mar 17 12:10:17.584: INFO: Pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-2879f no longer exists STEP: Creating a pod to test consume service account root CA Mar 17 12:10:17.587: INFO: Waiting up to 5m0s for pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv" in namespace "e2e-tests-svcaccounts-sgvfg" to be "success or failure" Mar 17 12:10:17.589: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49847ms Mar 17 12:10:19.594: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006947747s Mar 17 12:10:21.598: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01116847s Mar 17 12:10:23.602: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014732355s STEP: Saw pod success Mar 17 12:10:23.602: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv" satisfied condition "success or failure" Mar 17 12:10:23.605: INFO: Trying to get logs from node hunter-worker pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv container root-ca-test: STEP: delete the pod Mar 17 12:10:23.622: INFO: Waiting for pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv to disappear Mar 17 12:10:23.626: INFO: Pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-lqlzv no longer exists STEP: Creating a pod to test consume service account namespace Mar 17 12:10:23.630: INFO: Waiting up to 5m0s for pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4" in namespace "e2e-tests-svcaccounts-sgvfg" to be "success or failure" Mar 17 12:10:23.645: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.89769ms Mar 17 12:10:25.649: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019318163s Mar 17 12:10:27.653: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022631741s Mar 17 12:10:29.657: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.026879165s STEP: Saw pod success Mar 17 12:10:29.657: INFO: Pod "pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4" satisfied condition "success or failure" Mar 17 12:10:29.660: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4 container namespace-test: STEP: delete the pod Mar 17 12:10:29.688: INFO: Waiting for pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4 to disappear Mar 17 12:10:29.705: INFO: Pod pod-service-account-405a033a-6848-11ea-b08f-0242ac11000f-cvrv4 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:10:29.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-sgvfg" for this suite. Mar 17 12:10:35.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:10:35.757: INFO: namespace: e2e-tests-svcaccounts-sgvfg, resource: bindings, ignored listing per whitelist Mar 17 12:10:35.794: INFO: namespace e2e-tests-svcaccounts-sgvfg deletion completed in 6.086239628s • [SLOW TEST:24.918 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:10:35.795: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 17 12:10:35.928: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:10:35.930: INFO: Number of nodes with available pods: 0 Mar 17 12:10:35.930: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:10:36.934: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:10:36.937: INFO: Number of nodes with available pods: 0 Mar 17 12:10:36.937: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:10:37.935: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:10:37.938: INFO: Number of nodes with available pods: 0 Mar 17 12:10:37.938: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:10:38.934: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:10:38.938: INFO: Number of nodes with available pods: 1 Mar 17 12:10:38.938: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:10:39.935: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:10:39.939: INFO: Number of nodes with available pods: 2 Mar 17 12:10:39.939: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 17 12:10:39.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:10:39.999: INFO: Number of nodes with available pods: 2 Mar 17 12:10:39.999: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4ngwg, will wait for the garbage collector to delete the pods Mar 17 12:10:41.071: INFO: Deleting DaemonSet.extensions daemon-set took: 6.649839ms Mar 17 12:10:41.171: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.311736ms Mar 17 12:10:51.775: INFO: Number of nodes with available pods: 0 Mar 17 12:10:51.775: INFO: Number of running nodes: 0, number of available pods: 0 Mar 17 12:10:51.778: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4ngwg/daemonsets","resourceVersion":"326126"},"items":null} Mar 17 12:10:51.800: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4ngwg/pods","resourceVersion":"326126"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:10:51.810: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4ngwg" for this suite. Mar 17 12:10:57.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:10:57.869: INFO: namespace: e2e-tests-daemonsets-4ngwg, resource: bindings, ignored listing per whitelist Mar 17 12:10:57.919: INFO: namespace e2e-tests-daemonsets-4ngwg deletion completed in 6.106146197s • [SLOW TEST:22.125 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:10:57.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-5c182c4a-6848-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 12:10:58.036: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-lhns4" to be "success or failure" Mar 
17 12:10:58.052: INFO: Pod "pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.221561ms Mar 17 12:11:00.056: INFO: Pod "pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02041673s Mar 17 12:11:02.061: INFO: Pod "pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024493416s STEP: Saw pod success Mar 17 12:11:02.061: INFO: Pod "pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:11:02.064: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 17 12:11:02.098: INFO: Waiting for pod pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f to disappear Mar 17 12:11:02.148: INFO: Pod pod-projected-secrets-5c18e4a9-6848-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:11:02.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lhns4" for this suite. 
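Editor's note: the `Waiting up to 5m0s for pod ... to be "success or failure"` entries above all follow the same pattern — poll the pod phase roughly every 2 seconds, log the elapsed time, and stop on a terminal phase or timeout. A minimal sketch of that loop (not the framework's actual Go implementation; `wait_for_pod_phase` and its parameters are illustrative names):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase ("Succeeded" or
    "Failed") or timeout_s elapses, logging elapsed time each iteration
    in the same shape as the framework's 'Elapsed: ...' lines."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase={phase!r}. Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod not in a terminal phase after {timeout_s}s")
        sleep(interval_s)
```

With a pod that reports "Pending" twice and then "Succeeded", the loop returns "Succeeded" after three polls, matching the Pending/Pending/Pending/Succeeded progressions in the log.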
Mar 17 12:11:08.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:11:08.241: INFO: namespace: e2e-tests-projected-lhns4, resource: bindings, ignored listing per whitelist Mar 17 12:11:08.249: INFO: namespace e2e-tests-projected-lhns4 deletion completed in 6.095400188s • [SLOW TEST:10.329 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:11:08.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Mar 17 12:11:08.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:10.403: 
INFO: stderr: "" Mar 17 12:11:10.403: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 17 12:11:10.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:10.535: INFO: stderr: "" Mar 17 12:11:10.535: INFO: stdout: "update-demo-nautilus-d8475 update-demo-nautilus-r74w9 " Mar 17 12:11:10.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8475 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:10.640: INFO: stderr: "" Mar 17 12:11:10.640: INFO: stdout: "" Mar 17 12:11:10.640: INFO: update-demo-nautilus-d8475 is created but not running Mar 17 12:11:15.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:15.742: INFO: stderr: "" Mar 17 12:11:15.742: INFO: stdout: "update-demo-nautilus-d8475 update-demo-nautilus-r74w9 " Mar 17 12:11:15.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8475 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:15.846: INFO: stderr: "" Mar 17 12:11:15.846: INFO: stdout: "true" Mar 17 12:11:15.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8475 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:15.934: INFO: stderr: "" Mar 17 12:11:15.935: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 12:11:15.935: INFO: validating pod update-demo-nautilus-d8475 Mar 17 12:11:15.939: INFO: got data: { "image": "nautilus.jpg" } Mar 17 12:11:15.939: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 12:11:15.939: INFO: update-demo-nautilus-d8475 is verified up and running Mar 17 12:11:15.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r74w9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:16.036: INFO: stderr: "" Mar 17 12:11:16.036: INFO: stdout: "true" Mar 17 12:11:16.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r74w9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:16.134: INFO: stderr: "" Mar 17 12:11:16.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 12:11:16.134: INFO: validating pod update-demo-nautilus-r74w9 Mar 17 12:11:16.139: INFO: got data: { "image": "nautilus.jpg" } Mar 17 12:11:16.139: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 12:11:16.139: INFO: update-demo-nautilus-r74w9 is verified up and running STEP: rolling-update to new replication controller Mar 17 12:11:16.142: INFO: scanned /root for discovery docs: Mar 17 12:11:16.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:38.710: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 17 12:11:38.711: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
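Editor's note: the `rolling-update` stdout above spells out its constraints — scale one pod at a time, keep 2 pods available, don't exceed 3 pods total. A simplified sketch of how those constraints produce exactly the logged step order (this models only the scaling arithmetic; real `kubectl` also waits for pod readiness between steps):

```python
def rolling_update_steps(old, new_target, min_available, max_total):
    """Return (controller, new_size) scale steps for a one-pod-at-a-time
    rolling update: scale the new controller up when total pods stay
    within max_total, otherwise scale the old one down while total pods
    stay above min_available."""
    new = 0
    steps = []
    while old > 0 or new < new_target:
        if new < new_target and old + new < max_total:
            new += 1
            steps.append(("new", new))
        elif old > 0 and old + new > min_available:
            old -= 1
            steps.append(("old", old))
        else:
            break  # simplification: real kubectl blocks on readiness here
    return steps
```

For old=2, target=2, min_available=2, max_total=3 this yields new→1, old→1, new→2, old→0 — the same sequence as "Scaling update-demo-kitten up to 1 / Scaling update-demo-nautilus down to 1 / Scaling update-demo-kitten up to 2 / Scaling update-demo-nautilus down to 0" in the log.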
Mar 17 12:11:38.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:38.807: INFO: stderr: "" Mar 17 12:11:38.807: INFO: stdout: "update-demo-kitten-h6wpb update-demo-kitten-t6g2h " Mar 17 12:11:38.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h6wpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:38.920: INFO: stderr: "" Mar 17 12:11:38.920: INFO: stdout: "true" Mar 17 12:11:38.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h6wpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:39.025: INFO: stderr: "" Mar 17 12:11:39.025: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 17 12:11:39.025: INFO: validating pod update-demo-kitten-h6wpb Mar 17 12:11:39.028: INFO: got data: { "image": "kitten.jpg" } Mar 17 12:11:39.028: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 17 12:11:39.028: INFO: update-demo-kitten-h6wpb is verified up and running Mar 17 12:11:39.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t6g2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:39.122: INFO: stderr: "" Mar 17 12:11:39.122: INFO: stdout: "true" Mar 17 12:11:39.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t6g2h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sz847' Mar 17 12:11:39.233: INFO: stderr: "" Mar 17 12:11:39.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 17 12:11:39.233: INFO: validating pod update-demo-kitten-t6g2h Mar 17 12:11:39.237: INFO: got data: { "image": "kitten.jpg" } Mar 17 12:11:39.237: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 17 12:11:39.237: INFO: update-demo-kitten-t6g2h is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:11:39.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sz847" for this suite. 
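Editor's note: the test above drives `kubectl get pods -o template` with two Go templates — one listing `{{.metadata.name}}` over `.items`, one emitting `true` only when a `containerStatuses` entry with the expected name has a `running` state. The same extraction over the PodList/Pod JSON, sketched in Python for readability (field paths follow the Kubernetes v1 API objects shown in the log):

```python
def pod_names(pod_list):
    """Equivalent of: {{range .items}}{{.metadata.name}} {{end}}
    Note the trailing space after each name, as in the kubectl stdout."""
    return "".join(f'{p["metadata"]["name"]} ' for p in pod_list.get("items", []))

def container_running(pod, container_name):
    """Equivalent of the containerStatuses template: true only if a status
    entry named container_name exists and its state map has a 'running' key."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False
```

An empty result from `container_running` corresponds to the empty stdout and the "is created but not running" lines early in the test; once the container starts, the template prints "true".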
Mar 17 12:12:03.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:12:03.268: INFO: namespace: e2e-tests-kubectl-sz847, resource: bindings, ignored listing per whitelist Mar 17 12:12:03.401: INFO: namespace e2e-tests-kubectl-sz847 deletion completed in 24.160396107s • [SLOW TEST:55.152 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:12:03.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 17 12:12:03.538: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:03.540: INFO: Number of nodes with available pods: 0 Mar 17 12:12:03.540: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:12:04.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:04.549: INFO: Number of nodes with available pods: 0 Mar 17 12:12:04.549: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:12:05.887: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:05.916: INFO: Number of nodes with available pods: 0 Mar 17 12:12:05.916: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:12:06.605: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:06.608: INFO: Number of nodes with available pods: 0 Mar 17 12:12:06.608: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:12:07.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:07.549: INFO: Number of nodes with available pods: 0 Mar 17 12:12:07.549: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:12:08.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:08.549: INFO: Number of nodes with available pods: 2 Mar 17 12:12:08.549: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Mar 17 12:12:08.569: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:08.592: INFO: Number of nodes with available pods: 1 Mar 17 12:12:08.592: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:09.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:09.599: INFO: Number of nodes with available pods: 1 Mar 17 12:12:09.599: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:10.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:10.627: INFO: Number of nodes with available pods: 1 Mar 17 12:12:10.627: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:11.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:11.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:11.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:12.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:12.601: INFO: Number of nodes with available pods: 1 Mar 17 12:12:12.601: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:13.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:13.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:13.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:14.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:14.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:14.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:15.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:15.601: INFO: Number of nodes with available pods: 1 Mar 17 12:12:15.601: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:16.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:16.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:16.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:17.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:17.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:17.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:18.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:18.601: INFO: Number of nodes with available pods: 1 Mar 17 12:12:18.601: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:19.597: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:19.601: INFO: Number of nodes with available pods: 1 Mar 17 12:12:19.601: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:20.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:20.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:20.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:21.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:21.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:21.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:22.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:22.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:22.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:23.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:23.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:23.600: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:12:24.597: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:24.600: INFO: Number of nodes with available pods: 1 Mar 17 12:12:24.600: INFO: Node hunter-worker2 is running more than one daemon 
pod Mar 17 12:12:25.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:12:25.599: INFO: Number of nodes with available pods: 2 Mar 17 12:12:25.599: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gc2gk, will wait for the garbage collector to delete the pods Mar 17 12:12:25.661: INFO: Deleting DaemonSet.extensions daemon-set took: 6.396069ms Mar 17 12:12:25.762: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.457287ms Mar 17 12:12:31.765: INFO: Number of nodes with available pods: 0 Mar 17 12:12:31.765: INFO: Number of running nodes: 0, number of available pods: 0 Mar 17 12:12:31.767: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gc2gk/daemonsets","resourceVersion":"326551"},"items":null} Mar 17 12:12:31.770: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gc2gk/pods","resourceVersion":"326551"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:12:31.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gc2gk" for this suite. 
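Editor's note: throughout the DaemonSet tests, `hunter-control-plane` is skipped because its `node-role.kubernetes.io/master` NoSchedule taint is not tolerated by the test's daemon pods. A simplified model of that filtering (exact-key matching only; the real scheduler also handles operators, values, and other effects):

```python
def tolerates(taint, tolerations):
    """True if any toleration matches the taint's key and effect.
    Simplified: empty/missing key or effect acts as a wildcard."""
    for t in tolerations:
        key_ok = t.get("key") in (None, "", taint["key"])
        effect_ok = t.get("effect") in (None, "", taint["effect"])
        if key_ok and effect_ok:
            return True
    return False

def schedulable_nodes(nodes, tolerations):
    """Drop nodes carrying a NoSchedule taint the pod does not tolerate,
    mirroring the log's 'DaemonSet pods can't tolerate node ... skip
    checking this node' decision."""
    return [
        n["name"] for n in nodes
        if all(t["effect"] != "NoSchedule" or tolerates(t, tolerations)
               for t in n.get("taints", []))
    ]
```

With no tolerations, the tainted control-plane node is excluded, leaving the two workers — which is why the test only ever waits for 2 running nodes.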
Mar 17 12:12:37.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:12:37.802: INFO: namespace: e2e-tests-daemonsets-gc2gk, resource: bindings, ignored listing per whitelist Mar 17 12:12:37.871: INFO: namespace e2e-tests-daemonsets-gc2gk deletion completed in 6.08789037s • [SLOW TEST:34.470 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:12:37.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 17 12:12:46.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 17 12:12:46.066: INFO: Pod pod-with-poststart-http-hook still exists Mar 17 12:12:48.066: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 17 12:12:48.070: INFO: Pod pod-with-poststart-http-hook still exists Mar 17 12:12:50.066: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 17 12:12:50.070: INFO: Pod pod-with-poststart-http-hook still exists Mar 17 12:12:52.066: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 17 12:12:52.070: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:12:52.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-q8v6t" for this suite. 
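The pod under test attaches a `postStart` HTTP hook pointing at the handler container created in the BeforeEach step above. A sketch of the shape of such a pod spec, assuming a hypothetical handler address and port (neither is read from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
    - name: main
      image: docker.io/library/nginx:1.14-alpine   # illustrative image
      lifecycle:
        postStart:
          httpGet:
            # The test's BeforeEach starts a separate container to receive
            # this request; host, path, and port here are assumptions.
            host: 10.244.1.100
            path: /echo?msg=poststart
            port: 8080
```

The kubelet blocks the container from entering `Running` until the hook request completes, which is what the "check poststart hook" step verifies.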
Mar 17 12:13:14.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:13:14.114: INFO: namespace: e2e-tests-container-lifecycle-hook-q8v6t, resource: bindings, ignored listing per whitelist Mar 17 12:13:14.168: INFO: namespace e2e-tests-container-lifecycle-hook-q8v6t deletion completed in 22.093970987s • [SLOW TEST:36.297 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:13:14.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:13:14.312: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 17 12:13:14.318: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:14.320: INFO: Number of nodes with available pods: 0 Mar 17 12:13:14.320: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:13:15.324: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:15.327: INFO: Number of nodes with available pods: 0 Mar 17 12:13:15.327: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:13:16.330: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:16.343: INFO: Number of nodes with available pods: 0 Mar 17 12:13:16.343: INFO: Node hunter-worker is running more than one daemon pod Mar 17 12:13:17.324: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:17.327: INFO: Number of nodes with available pods: 1 Mar 17 12:13:17.327: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:13:18.323: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:18.326: INFO: Number of nodes with available pods: 2 Mar 17 12:13:18.326: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 17 12:13:18.410: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 17 12:13:18.410: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:18.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:19.484: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:19.484: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:19.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:20.438: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:20.438: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:20.441: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:21.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:21.421: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 17 12:13:21.421: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:21.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:22.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:22.420: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:22.420: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:22.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:23.438: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:23.438: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:23.438: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:23.441: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:24.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:24.420: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 17 12:13:24.420: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:24.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:25.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:25.421: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:25.421: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:25.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:26.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:26.420: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:26.420: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:26.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:27.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:27.421: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 17 12:13:27.421: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:27.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:28.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:28.421: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:28.421: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:28.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:29.444: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:29.444: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:29.444: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:29.447: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:30.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:30.420: INFO: Wrong image for pod: daemon-set-mhpkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 17 12:13:30.420: INFO: Pod daemon-set-mhpkd is not available Mar 17 12:13:30.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:31.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:31.421: INFO: Pod daemon-set-qfkf7 is not available Mar 17 12:13:31.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:32.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:32.420: INFO: Pod daemon-set-qfkf7 is not available Mar 17 12:13:32.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:33.435: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:33.435: INFO: Pod daemon-set-qfkf7 is not available Mar 17 12:13:33.458: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:34.487: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:34.492: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:35.420: INFO: Wrong image for pod: daemon-set-ks4b9. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:35.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:36.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:36.421: INFO: Pod daemon-set-ks4b9 is not available Mar 17 12:13:36.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:37.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:37.420: INFO: Pod daemon-set-ks4b9 is not available Mar 17 12:13:37.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:38.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:38.421: INFO: Pod daemon-set-ks4b9 is not available Mar 17 12:13:38.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:39.420: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 17 12:13:39.420: INFO: Pod daemon-set-ks4b9 is not available Mar 17 12:13:39.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:40.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:40.421: INFO: Pod daemon-set-ks4b9 is not available Mar 17 12:13:40.426: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:41.421: INFO: Wrong image for pod: daemon-set-ks4b9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 17 12:13:41.421: INFO: Pod daemon-set-ks4b9 is not available Mar 17 12:13:41.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:42.421: INFO: Pod daemon-set-6xfvl is not available Mar 17 12:13:42.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
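The pod-by-pod replacement traced above — at every poll exactly one pod is reported `not available` while the other keeps running the old image — is the behavior of the `RollingUpdate` strategy with its default `maxUnavailable: 1`. A sketch of the relevant DaemonSet fragment, with the image update that triggers the rollout taken from the "Wrong image" log lines:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # default: at most one daemon pod down per step
  template:
    spec:
      containers:
        - name: app
          # The spec update driving the rollout logged above: the test
          # switches from docker.io/library/nginx:1.14-alpine to this image.
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```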
Mar 17 12:13:42.429: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:42.432: INFO: Number of nodes with available pods: 1 Mar 17 12:13:42.432: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:13:43.451: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:43.455: INFO: Number of nodes with available pods: 1 Mar 17 12:13:43.455: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:13:44.437: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:44.440: INFO: Number of nodes with available pods: 1 Mar 17 12:13:44.440: INFO: Node hunter-worker2 is running more than one daemon pod Mar 17 12:13:45.437: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 17 12:13:45.441: INFO: Number of nodes with available pods: 2 Mar 17 12:13:45.441: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jr7xr, will wait for the garbage collector to delete the pods Mar 17 12:13:45.516: INFO: Deleting DaemonSet.extensions daemon-set took: 6.715182ms Mar 17 12:13:45.616: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.265697ms Mar 17 12:13:51.820: INFO: Number of nodes with available pods: 0 Mar 17 12:13:51.820: INFO: Number of running nodes: 0, 
number of available pods: 0 Mar 17 12:13:51.823: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jr7xr/daemonsets","resourceVersion":"326852"},"items":null} Mar 17 12:13:51.825: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jr7xr/pods","resourceVersion":"326852"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:13:51.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-jr7xr" for this suite. Mar 17 12:13:57.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:13:57.919: INFO: namespace: e2e-tests-daemonsets-jr7xr, resource: bindings, ignored listing per whitelist Mar 17 12:13:57.932: INFO: namespace e2e-tests-daemonsets-jr7xr deletion completed in 6.093886443s • [SLOW TEST:43.764 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:13:57.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 17 12:14:02.080: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c766dcd0-6848-11ea-b08f-0242ac11000f,GenerateName:,Namespace:e2e-tests-events-g947r,SelfLink:/api/v1/namespaces/e2e-tests-events-g947r/pods/send-events-c766dcd0-6848-11ea-b08f-0242ac11000f,UID:c76769c4-6848-11ea-99e8-0242ac110002,ResourceVersion:326914,Generation:0,CreationTimestamp:2020-03-17 12:13:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 56269330,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gcnr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gcnr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-gcnr7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00208fee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00208ff00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 12:13:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 12:14:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 12:14:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-17 12:13:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.193,StartTime:2020-03-17 12:13:58 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-17 12:14:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://9fac6fc11bae006a90c2edff181374e293b33b545195612bbc6e0ac85d7c5d55}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 17 12:14:04.085: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 17 12:14:06.090: INFO: Saw kubelet event for our pod. 
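The verbose pod dump above reduces to a small manifest; the fields below (labels, container name, image, and port) are taken directly from the logged ObjectMeta/PodSpec, with everything else left to defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-example     # the test's real name carries a generated UUID suffix
  labels:
    name: foo
    time: "56269330"
spec:
  containers:
    - name: p
      image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
      ports:
        - containerPort: 80
          protocol: TCP
```

The test then watches for the scheduler's `Scheduled` event and the kubelet's pod-lifecycle events against this pod, as the two "Saw ... event for our pod" lines confirm.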
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:14:06.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-g947r" for this suite. Mar 17 12:14:44.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:14:44.170: INFO: namespace: e2e-tests-events-g947r, resource: bindings, ignored listing per whitelist Mar 17 12:14:44.199: INFO: namespace e2e-tests-events-g947r deletion completed in 38.093910133s • [SLOW TEST:46.267 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:14:44.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-e2f9bf6a-6848-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 12:14:44.345: INFO: 
Waiting up to 5m0s for pod "pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-rq7qc" to be "success or failure" Mar 17 12:14:44.358: INFO: Pod "pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.961841ms Mar 17 12:14:46.362: INFO: Pod "pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016811563s Mar 17 12:14:48.365: INFO: Pod "pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020454067s STEP: Saw pod success Mar 17 12:14:48.365: INFO: Pod "pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:14:48.368: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 17 12:14:48.389: INFO: Waiting for pod pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f to disappear Mar 17 12:14:48.394: INFO: Pod pod-configmaps-e2fa78fc-6848-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:14:48.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rq7qc" for this suite. 
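"Consumable from pods in volume with mappings" means the ConfigMap is mounted with an explicit `items` list that remaps a key to a chosen file path inside the volume, rather than projecting every key under its own name. A sketch of the shape this test exercises (key and path names are illustrative, not read from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical; the test appends a UUID suffix
spec:
  containers:
    - name: configmap-volume-test
      image: docker.io/library/nginx:1.14-alpine   # illustrative image
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map   # created in the STEP above (suffix elided)
        items:
          - key: data-1                   # illustrative key
            path: path/to/my-data         # remapped path inside the mount
```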
Mar 17 12:14:54.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:14:54.463: INFO: namespace: e2e-tests-configmap-rq7qc, resource: bindings, ignored listing per whitelist Mar 17 12:14:54.492: INFO: namespace e2e-tests-configmap-rq7qc deletion completed in 6.095655741s • [SLOW TEST:10.293 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:14:54.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-e91b3691-6848-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 12:14:54.621: INFO: Waiting up to 5m0s for pod "pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-nj8r4" to be "success or failure" Mar 17 12:14:54.637: INFO: Pod "pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.862086ms Mar 17 12:14:56.640: INFO: Pod "pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019105025s Mar 17 12:14:58.645: INFO: Pod "pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023531996s STEP: Saw pod success Mar 17 12:14:58.645: INFO: Pod "pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:14:58.648: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 17 12:14:58.669: INFO: Waiting for pod pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f to disappear Mar 17 12:14:58.673: INFO: Pod pod-secrets-e91ce3a4-6848-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:14:58.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nj8r4" for this suite. 
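The Secret variant is analogous to the ConfigMap test above: the secret is mounted with an `items` mapping from key to file path, and the test pod reads the file back to verify the projection. A sketch under the same caveats (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical; the test appends a UUID suffix
spec:
  containers:
    - name: secret-volume-test
      image: docker.io/library/nginx:1.14-alpine   # illustrative image
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map   # created in the STEP above (suffix elided)
        items:
          - key: data-1               # illustrative key
            path: new-path-data-1     # remapped file name in the volume
```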
Mar 17 12:15:04.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:15:04.704: INFO: namespace: e2e-tests-secrets-nj8r4, resource: bindings, ignored listing per whitelist Mar 17 12:15:04.770: INFO: namespace e2e-tests-secrets-nj8r4 deletion completed in 6.094083623s • [SLOW TEST:10.277 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:15:04.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 17 12:15:04.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-n4n2j' Mar 17 12:15:04.978: INFO: stderr: "" Mar 17 12:15:04.978: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 17 12:15:10.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-n4n2j -o json' Mar 17 12:15:10.126: INFO: stderr: "" Mar 17 12:15:10.126: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-17T12:15:04Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-n4n2j\",\n \"resourceVersion\": \"327118\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-n4n2j/pods/e2e-test-nginx-pod\",\n \"uid\": \"ef47fe7a-6848-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8rrfw\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8rrfw\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8rrfw\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-17T12:15:05Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-17T12:15:07Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-17T12:15:07Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-17T12:15:04Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://5323b7d79a5eeb530083fd42864707bdd282b32d94b89be4f1832e8b79a679a1\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-17T12:15:06Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.195\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-17T12:15:05Z\"\n }\n}\n" STEP: replace the image in the pod Mar 17 12:15:10.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-n4n2j' Mar 17 12:15:10.371: INFO: stderr: "" Mar 17 12:15:10.372: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 
17 12:15:10.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-n4n2j' Mar 17 12:15:21.275: INFO: stderr: "" Mar 17 12:15:21.275: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:15:21.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n4n2j" for this suite. Mar 17 12:15:27.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:15:27.319: INFO: namespace: e2e-tests-kubectl-n4n2j, resource: bindings, ignored listing per whitelist Mar 17 12:15:27.362: INFO: namespace e2e-tests-kubectl-n4n2j deletion completed in 6.080076235s • [SLOW TEST:22.592 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:15:27.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in 
volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-fcab65e1-6848-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 12:15:27.471: INFO: Waiting up to 5m0s for pod "pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-j2h2m" to be "success or failure" Mar 17 12:15:27.495: INFO: Pod "pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.346512ms Mar 17 12:15:29.535: INFO: Pod "pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063530543s Mar 17 12:15:31.539: INFO: Pod "pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067366844s STEP: Saw pod success Mar 17 12:15:31.539: INFO: Pod "pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:15:31.541: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 17 12:15:31.575: INFO: Waiting for pod pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f to disappear Mar 17 12:15:31.614: INFO: Pod pod-secrets-fcabff3a-6848-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:15:31.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-j2h2m" for this suite. 
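The defaultMode variant differs from the mappings test in a single field on the volume source. A minimal sketch, with placeholder names and a representative mode (the log does not show which mode the run used; 0400 is illustrative):

```yaml
# Hypothetical manifest; only the defaultMode line is the point of the test.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c %a /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400             # file mode applied to every projected key
```

Without `defaultMode`, secret files default to 0644 (the `"defaultMode": 420` visible in the pod JSON earlier in this log is that same value in decimal).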
Mar 17 12:15:37.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:15:37.692: INFO: namespace: e2e-tests-secrets-j2h2m, resource: bindings, ignored listing per whitelist Mar 17 12:15:37.702: INFO: namespace e2e-tests-secrets-j2h2m deletion completed in 6.084606443s • [SLOW TEST:10.340 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:15:37.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-pf25c/secret-test-02dcfd66-6849-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 12:15:37.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-pf25c" to be "success or failure" Mar 17 12:15:37.848: INFO: Pod "pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.027442ms Mar 17 12:15:39.852: INFO: Pod "pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019570775s Mar 17 12:15:41.856: INFO: Pod "pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023666521s STEP: Saw pod success Mar 17 12:15:41.856: INFO: Pod "pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:15:41.858: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f container env-test: STEP: delete the pod Mar 17 12:15:41.899: INFO: Waiting for pod pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f to disappear Mar 17 12:15:41.904: INFO: Pod pod-configmaps-02de442a-6849-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:15:41.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-pf25c" for this suite. 
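Consuming a secret "via the environment", as this test does, uses `env[].valueFrom.secretKeyRef` rather than a volume. A minimal sketch with placeholder names (the run used generated secret-test-02dcfd66-… names; `env-test` is the container name from the log):

```yaml
# Hypothetical manifest; secret and key names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

Note the environment value is captured at container start; unlike a mounted secret volume, it is not updated if the Secret changes afterwards.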
Mar 17 12:15:47.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:15:48.006: INFO: namespace: e2e-tests-secrets-pf25c, resource: bindings, ignored listing per whitelist Mar 17 12:15:48.011: INFO: namespace e2e-tests-secrets-pf25c deletion completed in 6.103048307s • [SLOW TEST:10.308 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:15:48.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 17 12:15:48.171: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-5rzr8,SelfLink:/api/v1/namespaces/e2e-tests-watch-5rzr8/configmaps/e2e-watch-test-watch-closed,UID:09061395-6849-11ea-99e8-0242ac110002,ResourceVersion:327275,Generation:0,CreationTimestamp:2020-03-17 12:15:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 17 12:15:48.171: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-5rzr8,SelfLink:/api/v1/namespaces/e2e-tests-watch-5rzr8/configmaps/e2e-watch-test-watch-closed,UID:09061395-6849-11ea-99e8-0242ac110002,ResourceVersion:327276,Generation:0,CreationTimestamp:2020-03-17 12:15:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 17 12:15:48.182: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-5rzr8,SelfLink:/api/v1/namespaces/e2e-tests-watch-5rzr8/configmaps/e2e-watch-test-watch-closed,UID:09061395-6849-11ea-99e8-0242ac110002,ResourceVersion:327277,Generation:0,CreationTimestamp:2020-03-17 12:15:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 17 12:15:48.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-5rzr8,SelfLink:/api/v1/namespaces/e2e-tests-watch-5rzr8/configmaps/e2e-watch-test-watch-closed,UID:09061395-6849-11ea-99e8-0242ac110002,ResourceVersion:327278,Generation:0,CreationTimestamp:2020-03-17 12:15:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:15:48.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-5rzr8" for this suite. 
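The watch events above show the mechanism under test: the first watch observes ADDED (ResourceVersion 327275) and MODIFIED (327276), is closed, and a second watch started from the last observed resourceVersion replays the changes made in between, MODIFIED (327277) and DELETED (327278). The object itself is just a labeled ConfigMap:

```yaml
# The ConfigMap from the watch events above, reconstructed as a manifest.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "1"
# Resuming at the API level: a watch request that passes the last observed
# resourceVersion receives every change recorded after it, e.g.
#   GET /api/v1/namespaces/<ns>/configmaps?watch=1&resourceVersion=327276
# (namespace elided; the run's namespace was e2e-tests-watch-5rzr8)
```

This only works while the requested resourceVersion is still within the server's watch history window; older versions get a 410 Gone and the client must relist.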
Mar 17 12:15:54.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:15:54.242: INFO: namespace: e2e-tests-watch-5rzr8, resource: bindings, ignored listing per whitelist Mar 17 12:15:54.305: INFO: namespace e2e-tests-watch-5rzr8 deletion completed in 6.118778135s • [SLOW TEST:6.294 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:15:54.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 12:15:54.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-dshdt" to be "success or failure" Mar 17 12:15:54.459: INFO: Pod "downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.250446ms Mar 17 12:15:56.463: INFO: Pod "downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006129433s Mar 17 12:15:58.467: INFO: Pod "downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01019318s STEP: Saw pod success Mar 17 12:15:58.467: INFO: Pod "downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:15:58.470: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 12:15:58.484: INFO: Waiting for pod downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f to disappear Mar 17 12:15:58.489: INFO: Pod downwardapi-volume-0cc1616b-6849-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:15:58.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dshdt" for this suite. 
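Exposing a container's memory request through a projected downward API volume, as this test does, uses a `resourceFieldRef` item. A minimal sketch with placeholder names, image, and request size (`client-container` is the container name from the log):

```yaml
# Hypothetical manifest; the request value and paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:           # requires containerName, since
              containerName: client-container   # requests are per-container
              resource: requests.memory
```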
Mar 17 12:16:04.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:16:04.580: INFO: namespace: e2e-tests-projected-dshdt, resource: bindings, ignored listing per whitelist Mar 17 12:16:04.613: INFO: namespace e2e-tests-projected-dshdt deletion completed in 6.121685138s • [SLOW TEST:10.308 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:16:04.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0317 12:16:15.388457 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 17 12:16:15.388: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:16:15.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-7lg86" for this suite. 
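The garbage-collector scenario above hinges on a dependent pod carrying two ownerReferences. When `simpletest-rc-to-be-deleted` is removed with foreground deletion, the pod must survive because `simpletest-rc-to-stay` remains a valid owner. A sketch of such a pod's metadata (UIDs are placeholders; real references must carry the owners' actual UIDs):

```yaml
# Metadata fragment only; illustrates the dual-owner setup from the test.
metadata:
  name: simpletest-pod
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001   # placeholder UID
    blockOwnerDeletion: true    # owner waits on this dependent during
                                # foreground deletion
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002   # placeholder UID
```

The GC only deletes an object once it has no remaining live owners, which is exactly what the test asserts.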
Mar 17 12:16:23.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:16:23.426: INFO: namespace: e2e-tests-gc-7lg86, resource: bindings, ignored listing per whitelist Mar 17 12:16:23.490: INFO: namespace e2e-tests-gc-7lg86 deletion completed in 8.098443984s • [SLOW TEST:18.877 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:16:23.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 17 12:16:23.637: INFO: Waiting up to 5m0s for pod "pod-1e25aba6-6849-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-8w4p2" to be "success or failure" Mar 17 12:16:23.697: INFO: Pod "pod-1e25aba6-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 59.847215ms Mar 17 12:16:25.701: INFO: Pod "pod-1e25aba6-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.06364358s Mar 17 12:16:27.705: INFO: Pod "pod-1e25aba6-6849-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067628315s STEP: Saw pod success Mar 17 12:16:27.705: INFO: Pod "pod-1e25aba6-6849-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:16:27.708: INFO: Trying to get logs from node hunter-worker pod pod-1e25aba6-6849-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 12:16:27.765: INFO: Waiting for pod pod-1e25aba6-6849-11ea-b08f-0242ac11000f to disappear Mar 17 12:16:27.771: INFO: Pod pod-1e25aba6-6849-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:16:27.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8w4p2" for this suite. Mar 17 12:16:33.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:16:33.821: INFO: namespace: e2e-tests-emptydir-8w4p2, resource: bindings, ignored listing per whitelist Mar 17 12:16:33.868: INFO: namespace e2e-tests-emptydir-8w4p2 deletion completed in 6.094196517s • [SLOW TEST:10.378 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:16:33.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:17:03.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-jzzt2" for this suite. 
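The three container names in the steps above plausibly encode the restart policy under test (rpa = Always, rpof = OnFailure, rpn = Never); that reading is an assumption, as the log does not spell it out. Each case runs a container that exits immediately and then checks RestartCount, Phase, the Ready condition, and State. A sketch of the Never case:

```yaml
# Hypothetical pod for the 'rpn' case; the other two cases would swap in
# restartPolicy: Always / OnFailure and a matching exit code.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd-rpn
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 0"]
```

With `restartPolicy: Never` and exit code 0, the expected observations are RestartCount 0, Phase `Succeeded`, Ready `false`, and a terminated State.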
Mar 17 12:17:09.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:17:09.620: INFO: namespace: e2e-tests-container-runtime-jzzt2, resource: bindings, ignored listing per whitelist Mar 17 12:17:09.667: INFO: namespace e2e-tests-container-runtime-jzzt2 deletion completed in 6.105196419s • [SLOW TEST:35.799 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:17:09.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 12:17:09.802: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-xtnt6" to be "success or failure" Mar 17 12:17:09.815: INFO: Pod "downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.908127ms Mar 17 12:17:11.818: INFO: Pod "downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01686982s Mar 17 12:17:13.822: INFO: Pod "downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020660198s STEP: Saw pod success Mar 17 12:17:13.822: INFO: Pod "downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:17:13.826: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 12:17:13.875: INFO: Waiting for pod downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f to disappear Mar 17 12:17:13.888: INFO: Pod downwardapi-volume-39ae0f09-6849-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:17:13.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xtnt6" for this suite. 
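The 'should set mode on item file' case above mounts a projected downward API volume and verifies the per-item file mode. A minimal sketch of such a volume, assuming illustrative names and a 0400 mode:

```yaml
# Illustrative sketch: a projected downward API item with an explicit mode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400        # the per-item mode the test verifies
```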
Mar 17 12:17:19.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:17:19.915: INFO: namespace: e2e-tests-projected-xtnt6, resource: bindings, ignored listing per whitelist Mar 17 12:17:19.988: INFO: namespace e2e-tests-projected-xtnt6 deletion completed in 6.095848761s • [SLOW TEST:10.321 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:17:19.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 17 12:17:24.636: INFO: Successfully updated pod "annotationupdate3fd30b38-6849-11ea-b08f-0242ac11000f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:17:26.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-downward-api-spmk5" for this suite. Mar 17 12:17:48.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:17:48.756: INFO: namespace: e2e-tests-downward-api-spmk5, resource: bindings, ignored listing per whitelist Mar 17 12:17:48.781: INFO: namespace e2e-tests-downward-api-spmk5 deletion completed in 22.113448413s • [SLOW TEST:28.793 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:17:48.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-zxfgc [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 17 12:17:48.961: INFO: Found 0 stateful pods, waiting for 3 Mar 17 12:17:58.971: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 12:17:58.971: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 12:17:58.972: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 17 12:17:58.997: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 17 12:18:09.033: INFO: Updating stateful set ss2 Mar 17 12:18:09.039: INFO: Waiting for Pod e2e-tests-statefulset-zxfgc/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 17 12:18:19.106: INFO: Found 1 stateful pods, waiting for 3 Mar 17 12:18:29.110: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 12:18:29.110: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 12:18:29.110: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 17 12:18:29.135: INFO: Updating stateful set ss2 Mar 17 12:18:29.143: INFO: Waiting for Pod e2e-tests-statefulset-zxfgc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 17 12:18:39.150: INFO: Waiting for Pod e2e-tests-statefulset-zxfgc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 17 12:18:49.166: INFO: Updating stateful set ss2 Mar 17 12:18:49.191: INFO: Waiting for StatefulSet 
e2e-tests-statefulset-zxfgc/ss2 to complete update Mar 17 12:18:49.192: INFO: Waiting for Pod e2e-tests-statefulset-zxfgc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 17 12:18:59.200: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zxfgc Mar 17 12:18:59.203: INFO: Scaling statefulset ss2 to 0 Mar 17 12:19:39.221: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 12:19:39.223: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:19:39.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-zxfgc" for this suite. Mar 17 12:19:45.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:19:45.316: INFO: namespace: e2e-tests-statefulset-zxfgc, resource: bindings, ignored listing per whitelist Mar 17 12:19:45.329: INFO: namespace e2e-tests-statefulset-zxfgc deletion completed in 6.094232567s • [SLOW TEST:116.548 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:19:45.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 17 12:19:45.462: INFO: Waiting up to 5m0s for pod "downward-api-967207a2-6849-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-6ls8j" to be "success or failure" Mar 17 12:19:45.470: INFO: Pod "downward-api-967207a2-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.583739ms Mar 17 12:19:47.473: INFO: Pod "downward-api-967207a2-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010898806s Mar 17 12:19:49.497: INFO: Pod "downward-api-967207a2-6849-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034199717s STEP: Saw pod success Mar 17 12:19:49.497: INFO: Pod "downward-api-967207a2-6849-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:19:49.500: INFO: Trying to get logs from node hunter-worker pod downward-api-967207a2-6849-11ea-b08f-0242ac11000f container dapi-container: STEP: delete the pod Mar 17 12:19:49.523: INFO: Waiting for pod downward-api-967207a2-6849-11ea-b08f-0242ac11000f to disappear Mar 17 12:19:49.527: INFO: Pod downward-api-967207a2-6849-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:19:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6ls8j" for this suite. Mar 17 12:19:55.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:19:55.582: INFO: namespace: e2e-tests-downward-api-6ls8j, resource: bindings, ignored listing per whitelist Mar 17 12:19:55.632: INFO: namespace e2e-tests-downward-api-6ls8j deletion completed in 6.101567194s • [SLOW TEST:10.302 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:19:55.632: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8td8f STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 17 12:19:55.768: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 17 12:20:21.861: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.142 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8td8f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 17 12:20:21.862: INFO: >>> kubeConfig: /root/.kube/config I0317 12:20:21.895129 6 log.go:172] (0xc000302840) (0xc0017b8be0) Create stream I0317 12:20:21.895173 6 log.go:172] (0xc000302840) (0xc0017b8be0) Stream added, broadcasting: 1 I0317 12:20:21.897768 6 log.go:172] (0xc000302840) Reply frame received for 1 I0317 12:20:21.897830 6 log.go:172] (0xc000302840) (0xc0024dc000) Create stream I0317 12:20:21.897857 6 log.go:172] (0xc000302840) (0xc0024dc000) Stream added, broadcasting: 3 I0317 12:20:21.898796 6 log.go:172] (0xc000302840) Reply frame received for 3 I0317 12:20:21.898866 6 log.go:172] (0xc000302840) (0xc0024dc0a0) Create stream I0317 12:20:21.898893 6 log.go:172] (0xc000302840) (0xc0024dc0a0) Stream added, broadcasting: 5 I0317 12:20:21.899754 6 log.go:172] (0xc000302840) Reply frame received for 5 I0317 12:20:22.992686 6 log.go:172] (0xc000302840) Data frame received for 5 I0317 12:20:22.992751 6 log.go:172] (0xc0024dc0a0) (5) Data frame handling I0317 12:20:22.992796 6 log.go:172] (0xc000302840) Data frame received for 3 I0317 
12:20:22.992885 6 log.go:172] (0xc0024dc000) (3) Data frame handling I0317 12:20:22.992938 6 log.go:172] (0xc0024dc000) (3) Data frame sent I0317 12:20:22.992966 6 log.go:172] (0xc000302840) Data frame received for 3 I0317 12:20:22.992995 6 log.go:172] (0xc0024dc000) (3) Data frame handling I0317 12:20:22.995379 6 log.go:172] (0xc000302840) Data frame received for 1 I0317 12:20:22.995454 6 log.go:172] (0xc0017b8be0) (1) Data frame handling I0317 12:20:22.995538 6 log.go:172] (0xc0017b8be0) (1) Data frame sent I0317 12:20:22.995579 6 log.go:172] (0xc000302840) (0xc0017b8be0) Stream removed, broadcasting: 1 I0317 12:20:22.995621 6 log.go:172] (0xc000302840) Go away received I0317 12:20:22.995770 6 log.go:172] (0xc000302840) (0xc0017b8be0) Stream removed, broadcasting: 1 I0317 12:20:22.995806 6 log.go:172] (0xc000302840) (0xc0024dc000) Stream removed, broadcasting: 3 I0317 12:20:22.995836 6 log.go:172] (0xc000302840) (0xc0024dc0a0) Stream removed, broadcasting: 5 Mar 17 12:20:22.995: INFO: Found all expected endpoints: [netserver-0] Mar 17 12:20:22.999: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.209 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8td8f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 17 12:20:22.999: INFO: >>> kubeConfig: /root/.kube/config I0317 12:20:23.037421 6 log.go:172] (0xc000302d10) (0xc0017b8fa0) Create stream I0317 12:20:23.037444 6 log.go:172] (0xc000302d10) (0xc0017b8fa0) Stream added, broadcasting: 1 I0317 12:20:23.043592 6 log.go:172] (0xc000302d10) Reply frame received for 1 I0317 12:20:23.043642 6 log.go:172] (0xc000302d10) (0xc0017b9040) Create stream I0317 12:20:23.043656 6 log.go:172] (0xc000302d10) (0xc0017b9040) Stream added, broadcasting: 3 I0317 12:20:23.044676 6 log.go:172] (0xc000302d10) Reply frame received for 3 I0317 12:20:23.044708 6 log.go:172] (0xc000302d10) (0xc001825180) Create stream 
I0317 12:20:23.044720 6 log.go:172] (0xc000302d10) (0xc001825180) Stream added, broadcasting: 5 I0317 12:20:23.045862 6 log.go:172] (0xc000302d10) Reply frame received for 5 I0317 12:20:24.129286 6 log.go:172] (0xc000302d10) Data frame received for 3 I0317 12:20:24.129324 6 log.go:172] (0xc0017b9040) (3) Data frame handling I0317 12:20:24.129356 6 log.go:172] (0xc0017b9040) (3) Data frame sent I0317 12:20:24.129371 6 log.go:172] (0xc000302d10) Data frame received for 3 I0317 12:20:24.129384 6 log.go:172] (0xc0017b9040) (3) Data frame handling I0317 12:20:24.129677 6 log.go:172] (0xc000302d10) Data frame received for 5 I0317 12:20:24.129710 6 log.go:172] (0xc001825180) (5) Data frame handling I0317 12:20:24.131502 6 log.go:172] (0xc000302d10) Data frame received for 1 I0317 12:20:24.131535 6 log.go:172] (0xc0017b8fa0) (1) Data frame handling I0317 12:20:24.131698 6 log.go:172] (0xc0017b8fa0) (1) Data frame sent I0317 12:20:24.131715 6 log.go:172] (0xc000302d10) (0xc0017b8fa0) Stream removed, broadcasting: 1 I0317 12:20:24.131736 6 log.go:172] (0xc000302d10) Go away received I0317 12:20:24.131894 6 log.go:172] (0xc000302d10) (0xc0017b8fa0) Stream removed, broadcasting: 1 I0317 12:20:24.131927 6 log.go:172] (0xc000302d10) (0xc0017b9040) Stream removed, broadcasting: 3 I0317 12:20:24.131940 6 log.go:172] (0xc000302d10) (0xc001825180) Stream removed, broadcasting: 5 Mar 17 12:20:24.131: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:20:24.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-8td8f" for this suite. 
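The connectivity probe above shells into a host-test container and runs `echo 'hostName' | nc -w 1 -u <pod-ip> 8081`, expecting each netserver pod to answer with its own name. A local, self-contained sketch of that request/reply shape (toy server on an ephemeral port, names assumed; this is not the real agnhost netserver protocol):

```python
# Minimal local sketch of the UDP hostname check the test performs with
# `echo 'hostName' | nc -w 1 -u <pod-ip> 8081`. A toy "netserver" answers
# the literal request "hostName" with its configured name; the port and
# reply value are assumptions for illustration.
import socket
import threading

def run_udp_netserver(sock, hostname):
    # Answer one "hostName" datagram with the configured hostname.
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(hostname.encode(), addr)

def query_hostname(addr, timeout=1.0):
    # Client side: one datagram out, one reply back, bounded by a timeout
    # (the nc invocation uses -w 1 for the same purpose).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.settimeout(timeout)
        c.sendto(b"hostName\n", addr)
        reply, _ = c.recvfrom(1024)
        return reply.decode()

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of 8081
t = threading.Thread(target=run_udp_netserver, args=(server, "netserver-0"))
t.start()
print(query_hostname(server.getsockname()))   # -> netserver-0
t.join()
server.close()
```

The e2e test repeats this against every netserver pod IP and reports "Found all expected endpoints" once each name has been seen.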
Mar 17 12:20:46.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:20:46.186: INFO: namespace: e2e-tests-pod-network-test-8td8f, resource: bindings, ignored listing per whitelist Mar 17 12:20:46.223: INFO: namespace e2e-tests-pod-network-test-8td8f deletion completed in 22.087113033s • [SLOW TEST:50.591 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:20:46.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:20:50.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-kubelet-test-hz4mt" for this suite. Mar 17 12:21:28.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:21:28.389: INFO: namespace: e2e-tests-kubelet-test-hz4mt, resource: bindings, ignored listing per whitelist Mar 17 12:21:28.441: INFO: namespace e2e-tests-kubelet-test-hz4mt deletion completed in 38.088135557s • [SLOW TEST:42.218 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:21:28.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:21:28.550: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Mar 17 12:21:28.555: 
INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2g99p/daemonsets","resourceVersion":"328673"},"items":null} Mar 17 12:21:28.556: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2g99p/pods","resourceVersion":"328673"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:21:28.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-2g99p" for this suite. Mar 17 12:21:34.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:21:34.614: INFO: namespace: e2e-tests-daemonsets-2g99p, resource: bindings, ignored listing per whitelist Mar 17 12:21:34.669: INFO: namespace e2e-tests-daemonsets-2g99p deletion completed in 6.102501541s S [SKIPPING] [6.228 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:21:28.550: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:21:34.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 17 12:21:34.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:36.911: INFO: stderr: "" Mar 17 12:21:36.911: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 17 12:21:36.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:37.020: INFO: stderr: "" Mar 17 12:21:37.020: INFO: stdout: "update-demo-nautilus-2hwdh update-demo-nautilus-jvnvr " Mar 17 12:21:37.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hwdh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:37.125: INFO: stderr: "" Mar 17 12:21:37.125: INFO: stdout: "" Mar 17 12:21:37.125: INFO: update-demo-nautilus-2hwdh is created but not running Mar 17 12:21:42.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:42.227: INFO: stderr: "" Mar 17 12:21:42.227: INFO: stdout: "update-demo-nautilus-2hwdh update-demo-nautilus-jvnvr " Mar 17 12:21:42.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hwdh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:42.316: INFO: stderr: "" Mar 17 12:21:42.316: INFO: stdout: "true" Mar 17 12:21:42.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hwdh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:42.417: INFO: stderr: "" Mar 17 12:21:42.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 12:21:42.417: INFO: validating pod update-demo-nautilus-2hwdh Mar 17 12:21:42.421: INFO: got data: { "image": "nautilus.jpg" } Mar 17 12:21:42.421: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 12:21:42.421: INFO: update-demo-nautilus-2hwdh is verified up and running Mar 17 12:21:42.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvnvr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:42.521: INFO: stderr: "" Mar 17 12:21:42.521: INFO: stdout: "true" Mar 17 12:21:42.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvnvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:42.616: INFO: stderr: "" Mar 17 12:21:42.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 12:21:42.616: INFO: validating pod update-demo-nautilus-jvnvr Mar 17 12:21:42.619: INFO: got data: { "image": "nautilus.jpg" } Mar 17 12:21:42.619: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 12:21:42.620: INFO: update-demo-nautilus-jvnvr is verified up and running STEP: using delete to clean up resources Mar 17 12:21:42.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:42.719: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 17 12:21:42.719: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 17 12:21:42.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-fkmqp' Mar 17 12:21:42.818: INFO: stderr: "No resources found.\n" Mar 17 12:21:42.818: INFO: stdout: "" Mar 17 12:21:42.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-fkmqp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 17 12:21:43.024: INFO: stderr: "" Mar 17 12:21:43.025: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:21:43.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fkmqp" for this suite. 
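The Update Demo polling loop above asks kubectl to evaluate a go-template over each pod, printing "true" only once the 'update-demo' container has a running state. The same predicate, sketched in Python over a pod object shaped like `kubectl get pod -o json` output (field names follow the v1 Pod schema; the sample data is made up):

```python
# Sketch of the go-template check used above:
# {{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}
#   {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}
# {{end}}{{end}}
def container_running(pod: dict, name: str = "update-demo") -> bool:
    # Mirrors the template: containerStatuses must exist, the entry must
    # match by name, and its state must carry a "running" key.
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# Made-up pod objects in the shape of `kubectl get pod -o json` output.
pending = {"status": {}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-03-17T12:21:40Z"}}},
]}}
print(container_running(pending))  # False, like the empty stdout at 12:21:37
print(container_running(running))  # True, like the "true" stdout at 12:21:42
```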
Mar 17 12:22:05.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:22:05.074: INFO: namespace: e2e-tests-kubectl-fkmqp, resource: bindings, ignored listing per whitelist Mar 17 12:22:05.124: INFO: namespace e2e-tests-kubectl-fkmqp deletion completed in 22.095516384s • [SLOW TEST:30.454 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:22:05.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:22:05.259: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.503442ms)
Mar 17 12:22:05.262: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.145812ms)
Mar 17 12:22:05.265: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.850882ms)
Mar 17 12:22:05.268: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.229778ms)
Mar 17 12:22:05.271: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.79372ms)
Mar 17 12:22:05.274: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.30034ms)
Mar 17 12:22:05.277: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.306414ms)
Mar 17 12:22:05.281: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.39094ms)
Mar 17 12:22:05.284: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.986413ms)
Mar 17 12:22:05.287: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.054919ms)
Mar 17 12:22:05.290: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.207283ms)
Mar 17 12:22:05.293: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.243905ms)
Mar 17 12:22:05.296: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.065093ms)
Mar 17 12:22:05.302: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 5.420072ms)
Mar 17 12:22:05.307: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 5.54335ms)
Mar 17 12:22:05.310: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.408213ms)
Mar 17 12:22:05.312: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.229104ms)
Mar 17 12:22:05.315: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.911266ms)
Mar 17 12:22:05.317: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.213948ms)
Mar 17 12:22:05.320: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.356943ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:22:05.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-rpx7l" for this suite. Mar 17 12:22:11.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:22:11.443: INFO: namespace: e2e-tests-proxy-rpx7l, resource: bindings, ignored listing per whitelist Mar 17 12:22:11.456: INFO: namespace e2e-tests-proxy-rpx7l deletion completed in 6.133458026s • [SLOW TEST:6.331 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:22:11.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Mar 17 12:22:11.572: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-659jh' Mar 17 12:22:11.858: INFO: stderr: "" Mar 17 12:22:11.858: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Mar 17 12:22:12.863: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:22:12.863: INFO: Found 0 / 1 Mar 17 12:22:13.874: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:22:13.874: INFO: Found 0 / 1 Mar 17 12:22:14.863: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:22:14.863: INFO: Found 1 / 1 Mar 17 12:22:14.863: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 17 12:22:14.866: INFO: Selector matched 1 pods for map[app:redis] Mar 17 12:22:14.866: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 17 12:22:14.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x4zcs redis-master --namespace=e2e-tests-kubectl-659jh' Mar 17 12:22:14.978: INFO: stderr: "" Mar 17 12:22:14.978: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Mar 12:22:14.175 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Mar 12:22:14.175 # Server started, Redis version 3.2.12\n1:M 17 Mar 12:22:14.175 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Mar 12:22:14.175 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 17 12:22:14.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-x4zcs redis-master --namespace=e2e-tests-kubectl-659jh --tail=1' Mar 17 12:22:15.075: INFO: stderr: "" Mar 17 12:22:15.075: INFO: stdout: "1:M 17 Mar 12:22:14.175 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 17 12:22:15.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-x4zcs redis-master --namespace=e2e-tests-kubectl-659jh --limit-bytes=1' Mar 17 12:22:15.188: INFO: stderr: "" Mar 17 12:22:15.188: INFO: stdout: " " STEP: exposing timestamps Mar 17 12:22:15.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-x4zcs redis-master --namespace=e2e-tests-kubectl-659jh --tail=1 --timestamps' Mar 17 12:22:15.291: INFO: 
stderr: "" Mar 17 12:22:15.291: INFO: stdout: "2020-03-17T12:22:14.176216872Z 1:M 17 Mar 12:22:14.175 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 17 12:22:17.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-x4zcs redis-master --namespace=e2e-tests-kubectl-659jh --since=1s' Mar 17 12:22:17.912: INFO: stderr: "" Mar 17 12:22:17.912: INFO: stdout: "" Mar 17 12:22:17.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-x4zcs redis-master --namespace=e2e-tests-kubectl-659jh --since=24h' Mar 17 12:22:18.034: INFO: stderr: "" Mar 17 12:22:18.034: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Mar 12:22:14.175 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Mar 12:22:14.175 # Server started, Redis version 3.2.12\n1:M 17 Mar 12:22:14.175 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 17 Mar 12:22:14.175 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Mar 17 12:22:18.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-659jh' Mar 17 12:22:18.141: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 17 12:22:18.142: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 17 12:22:18.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-659jh' Mar 17 12:22:18.257: INFO: stderr: "No resources found.\n" Mar 17 12:22:18.257: INFO: stdout: "" Mar 17 12:22:18.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-659jh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 17 12:22:18.347: INFO: stderr: "" Mar 17 12:22:18.347: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:22:18.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-659jh" for this suite. 
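[Editor's note] The "retrieve and filter logs" test above exercises the standard log-filtering flags one at a time. A condensed sketch of the same invocations (pod and container names are the test's generated ones and are illustrative; note the log uses the deprecated `kubectl log` alias, while the current spelling is `kubectl logs`):

```shell
# Each flag narrows the log stream returned by the kubelet:
kubectl logs redis-master-x4zcs -c redis-master --tail=1               # only the last line
kubectl logs redis-master-x4zcs -c redis-master --limit-bytes=1        # only the first byte
kubectl logs redis-master-x4zcs -c redis-master --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs redis-master-x4zcs -c redis-master --since=1s             # only lines from the last second
kubectl logs redis-master-x4zcs -c redis-master --since=24h            # only lines from the last 24 hours
```

In the run above, `--since=1s` returns nothing (the pod had been quiet for over a second) while `--since=24h` returns the full startup banner, which is exactly the behavior the conformance test asserts.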
Mar 17 12:22:40.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:22:40.460: INFO: namespace: e2e-tests-kubectl-659jh, resource: bindings, ignored listing per whitelist Mar 17 12:22:40.517: INFO: namespace e2e-tests-kubectl-659jh deletion completed in 22.166860836s • [SLOW TEST:29.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:22:40.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-fedac892-6849-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 12:22:40.650: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-kxj45" to be "success or failure" Mar 17 12:22:40.658: INFO: Pod 
"pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39012ms Mar 17 12:22:42.663: INFO: Pod "pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012478382s Mar 17 12:22:44.666: INFO: Pod "pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016121935s STEP: Saw pod success Mar 17 12:22:44.666: INFO: Pod "pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:22:44.669: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 17 12:22:44.698: INFO: Waiting for pod pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f to disappear Mar 17 12:22:44.706: INFO: Pod pod-projected-secrets-fedf9514-6849-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:22:44.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kxj45" for this suite. 
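[Editor's note] The projected-secret test above creates a secret, mounts it into a pod through a `projected` volume, and checks the container can read it. A minimal hand-written sketch of that pattern (all names here are illustrative, not the test's generated `projected-secret-test-…` ones):

```shell
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # Succeeds only if the projected secret key is readable at the mount path.
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
```

The "success or failure" condition in the log corresponds to this pod reaching `Succeeded`: the container exits 0 only when the mounted key is present with the expected content.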
Mar 17 12:22:50.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:22:50.752: INFO: namespace: e2e-tests-projected-kxj45, resource: bindings, ignored listing per whitelist Mar 17 12:22:50.792: INFO: namespace e2e-tests-projected-kxj45 deletion completed in 6.082709614s • [SLOW TEST:10.274 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:22:50.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 17 12:22:57.897: INFO: 8 pods remaining Mar 17 12:22:57.897: INFO: 0 pods has nil DeletionTimestamp Mar 17 12:22:57.897: INFO: Mar 17 12:22:58.619: INFO: 0 pods remaining Mar 17 12:22:58.619: INFO: 0 pods has nil DeletionTimestamp Mar 17 12:22:58.619: INFO: STEP: Gathering metrics W0317 12:22:59.538406 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 17 12:22:59.538: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:22:59.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tvzdx" for this suite. 
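[Editor's note] "Keep the rc around until all its pods are deleted" is foreground cascading deletion: the owner is only removed once the garbage collector has deleted its dependents. kubectl in this release (v1.13) only exposes `--cascade` as a boolean, so the test sets the propagation policy through `DeleteOptions` on the API directly. A sketch of the equivalent raw call via `kubectl proxy` (rc name and namespace are illustrative):

```shell
kubectl proxy --port=8080 &
# Foreground propagation: the rc gets a foregroundDeletion finalizer and is
# kept (with a deletionTimestamp set) until all of its pods are gone.
curl -X DELETE 'http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
```

That is why the log shows pods draining ("8 pods remaining" → "0 pods remaining") before the rc itself disappears.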
Mar 17 12:23:06.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:23:06.158: INFO: namespace: e2e-tests-gc-tvzdx, resource: bindings, ignored listing per whitelist Mar 17 12:23:06.202: INFO: namespace e2e-tests-gc-tvzdx deletion completed in 6.366335601s • [SLOW TEST:15.410 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:23:06.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 12:23:06.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 17 12:23:06.456: INFO: stderr: "" Mar 17 12:23:06.456: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", 
GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:23:06.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sjkzk" for this suite. Mar 17 12:23:12.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:23:12.532: INFO: namespace: e2e-tests-kubectl-sjkzk, resource: bindings, ignored listing per whitelist Mar 17 12:23:12.565: INFO: namespace e2e-tests-kubectl-sjkzk deletion completed in 6.105047207s • [SLOW TEST:6.362 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:23:12.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 17 12:23:12.656: INFO: PodSpec: initContainers in spec.initContainers Mar 17 12:24:02.456: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-11f81a9a-684a-11ea-b08f-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-init-container-cm88p", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-cm88p/pods/pod-init-11f81a9a-684a-11ea-b08f-0242ac11000f", UID:"11fc89a3-684a-11ea-99e8-0242ac110002", ResourceVersion:"329304", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720044592, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"656257115"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lq6b8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000511980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lq6b8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lq6b8", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lq6b8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0014d9eb8), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001325e60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014d9f40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014d9f60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0014d9f68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0014d9f6c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720044592, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720044592, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720044592, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720044592, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.217", StartTime:(*v1.Time)(0xc000c82020), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007fb6c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007fb7a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://bc74ca41dbd7b4c8303307256e754a0245841e8e7d034714eb66d84bef28fdb8"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c82060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c82040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:24:02.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-cm88p" for this suite. Mar 17 12:24:24.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:24:24.535: INFO: namespace: e2e-tests-init-container-cm88p, resource: bindings, ignored listing per whitelist Mar 17 12:24:24.642: INFO: namespace e2e-tests-init-container-cm88p deletion completed in 22.152953661s • [SLOW TEST:72.076 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:24:24.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 17 12:24:24.771: INFO: Waiting up to 5m0s for pod "pod-3cf1195c-684a-11ea-b08f-0242ac11000f" in namespace "e2e-tests-emptydir-6gn48" to be "success or failure" Mar 17 12:24:24.781: INFO: Pod "pod-3cf1195c-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.88266ms Mar 17 12:24:26.784: INFO: Pod "pod-3cf1195c-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013622319s Mar 17 12:24:28.788: INFO: Pod "pod-3cf1195c-684a-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017599017s STEP: Saw pod success Mar 17 12:24:28.789: INFO: Pod "pod-3cf1195c-684a-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:24:28.791: INFO: Trying to get logs from node hunter-worker2 pod pod-3cf1195c-684a-11ea-b08f-0242ac11000f container test-container: STEP: delete the pod Mar 17 12:24:28.811: INFO: Waiting for pod pod-3cf1195c-684a-11ea-b08f-0242ac11000f to disappear Mar 17 12:24:28.821: INFO: Pod pod-3cf1195c-684a-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:24:28.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6gn48" for this suite. 
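The "Waiting up to 5m0s for pod ... to be \"success or failure\"" lines above follow one pattern: poll the pod's phase every couple of seconds until it reaches a terminal phase or the timeout expires. A minimal sketch of that polling loop (function and parameter names are mine, not the framework's):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    Mirrors the log's "success or failure" wait: a pod is done when its
    phase is Succeeded or Failed; any other phase keeps the loop polling.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still in phase {phase!r} after {timeout}s")
        sleep(interval)

# Example: a pod observed Pending twice and then Succeeded,
# like the emptydir pod above (sleep stubbed out for the demo).
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None)
print(result)  # Succeeded
```

The elapsed times in the log (9.88ms, then ~2s, then ~4s) match this shape: one immediate check followed by fixed-interval polling.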
Mar 17 12:24:34.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:24:34.966: INFO: namespace: e2e-tests-emptydir-6gn48, resource: bindings, ignored listing per whitelist Mar 17 12:24:35.002: INFO: namespace e2e-tests-emptydir-6gn48 deletion completed in 6.17781384s • [SLOW TEST:10.360 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:24:35.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-5p2dt/configmap-test-431d364f-684a-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 12:24:35.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f" in namespace "e2e-tests-configmap-5p2dt" to be "success or failure" Mar 17 12:24:35.127: INFO: Pod "pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.716525ms Mar 17 12:24:37.135: INFO: Pod "pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011807227s Mar 17 12:24:39.140: INFO: Pod "pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01645288s STEP: Saw pod success Mar 17 12:24:39.140: INFO: Pod "pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:24:39.143: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f container env-test: STEP: delete the pod Mar 17 12:24:39.158: INFO: Waiting for pod pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f to disappear Mar 17 12:24:39.163: INFO: Pod pod-configmaps-431e6b26-684a-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:24:39.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5p2dt" for this suite. 
Mar 17 12:24:45.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:24:45.187: INFO: namespace: e2e-tests-configmap-5p2dt, resource: bindings, ignored listing per whitelist Mar 17 12:24:45.255: INFO: namespace e2e-tests-configmap-5p2dt deletion completed in 6.089444464s • [SLOW TEST:10.253 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:24:45.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 12:24:45.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-nxbmc" to be "success or failure" Mar 17 12:24:45.387: INFO: Pod "downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", 
readiness=false. Elapsed: 40.120187ms Mar 17 12:24:47.391: INFO: Pod "downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044391394s Mar 17 12:24:49.396: INFO: Pod "downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048876155s STEP: Saw pod success Mar 17 12:24:49.396: INFO: Pod "downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:24:49.399: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 12:24:49.419: INFO: Waiting for pod downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f to disappear Mar 17 12:24:49.422: INFO: Pod downwardapi-volume-4936bf49-684a-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:24:49.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nxbmc" for this suite. 
Mar 17 12:24:55.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:24:55.576: INFO: namespace: e2e-tests-downward-api-nxbmc, resource: bindings, ignored listing per whitelist Mar 17 12:24:55.588: INFO: namespace e2e-tests-downward-api-nxbmc deletion completed in 6.140704772s • [SLOW TEST:10.332 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:24:55.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Mar 17 12:24:55.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nfnrp' Mar 17 12:24:55.982: INFO: stderr: "" Mar 17 12:24:55.982: INFO: stdout: "pod/pause created\n" Mar 17 12:24:55.982: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 17 12:24:55.982: INFO: Waiting up to 5m0s for pod "pause" in namespace 
"e2e-tests-kubectl-nfnrp" to be "running and ready" Mar 17 12:24:55.996: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.608611ms Mar 17 12:24:58.000: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017563368s Mar 17 12:25:00.003: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.020599947s Mar 17 12:25:00.003: INFO: Pod "pause" satisfied condition "running and ready" Mar 17 12:25:00.003: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Mar 17 12:25:00.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-nfnrp' Mar 17 12:25:00.095: INFO: stderr: "" Mar 17 12:25:00.095: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 17 12:25:00.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nfnrp' Mar 17 12:25:00.188: INFO: stderr: "" Mar 17 12:25:00.188: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 17 12:25:00.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-nfnrp' Mar 17 12:25:00.287: INFO: stderr: "" Mar 17 12:25:00.287: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 17 12:25:00.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nfnrp' Mar 17 
12:25:00.415: INFO: stderr: "" Mar 17 12:25:00.415: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Mar 17 12:25:00.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nfnrp' Mar 17 12:25:00.564: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 17 12:25:00.564: INFO: stdout: "pod \"pause\" force deleted\n" Mar 17 12:25:00.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-nfnrp' Mar 17 12:25:00.682: INFO: stderr: "No resources found.\n" Mar 17 12:25:00.682: INFO: stdout: "" Mar 17 12:25:00.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-nfnrp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 17 12:25:00.776: INFO: stderr: "" Mar 17 12:25:00.776: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:25:00.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nfnrp" for this suite. 
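The label steps above exercise kubectl's two label-argument forms: `key=value` adds or updates a label, and a trailing `-` (as in `testing-label-`) removes it. A small sketch of that argument convention applied to a label map (helper name is mine, not kubectl's):

```python
def apply_label_args(labels, args):
    """Apply kubectl-style label arguments to a label map.

    "key=value" adds or updates the label; "key-" removes it,
    matching the add and remove steps in the test above.
    """
    out = dict(labels)
    for arg in args:
        if arg.endswith("-"):
            out.pop(arg[:-1], None)  # removal form: "key-"
        else:
            key, _, value = arg.partition("=")
            out[key] = value
    return out

# The two operations from the log, in order:
labels = apply_label_args({}, ["testing-label=testing-label-value"])
print(labels)  # {'testing-label': 'testing-label-value'}
labels = apply_label_args(labels, ["testing-label-"])
print(labels)  # {}
```

This is why the second `kubectl get pod pause -L testing-label` prints an empty TESTING-LABEL column: the removal form deleted the key entirely rather than setting it to an empty value.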
Mar 17 12:25:06.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:25:06.831: INFO: namespace: e2e-tests-kubectl-nfnrp, resource: bindings, ignored listing per whitelist Mar 17 12:25:06.903: INFO: namespace e2e-tests-kubectl-nfnrp deletion completed in 6.123298378s • [SLOW TEST:11.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:25:06.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xszjm Mar 17 12:25:11.063: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xszjm STEP: checking the pod's current state and verifying that 
restartCount is present Mar 17 12:25:11.066: INFO: Initial restart count of pod liveness-http is 0 Mar 17 12:25:29.106: INFO: Restart count of pod e2e-tests-container-probe-xszjm/liveness-http is now 1 (18.039781891s elapsed) Mar 17 12:25:49.148: INFO: Restart count of pod e2e-tests-container-probe-xszjm/liveness-http is now 2 (38.081411614s elapsed) Mar 17 12:26:09.190: INFO: Restart count of pod e2e-tests-container-probe-xszjm/liveness-http is now 3 (58.123851976s elapsed) Mar 17 12:26:29.253: INFO: Restart count of pod e2e-tests-container-probe-xszjm/liveness-http is now 4 (1m18.187111818s elapsed) Mar 17 12:27:37.441: INFO: Restart count of pod e2e-tests-container-probe-xszjm/liveness-http is now 5 (2m26.374737795s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:27:37.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xszjm" for this suite. 
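The property this probe test asserts is that the observed restart count only ever increases across samples. A sketch of that invariant check, fed with the counts from the log above (function name is mine, not the framework's):

```python
def assert_monotonic_restarts(observations):
    """Verify restart counts never decrease across observations.

    observations: iterable of (elapsed_seconds, restart_count) pairs,
    as reported by the "Restart count ... is now N" log lines.
    Returns the final count; raises if the count ever goes backwards.
    """
    last = -1
    for elapsed, count in observations:
        if count < last:
            raise AssertionError(
                f"restart count went backwards at {elapsed}s: {count} < {last}")
        last = count
    return last

# Samples taken from the log above: restarts 1..5 over ~2m26s.
final = assert_monotonic_restarts(
    [(18.0, 1), (38.1, 2), (58.1, 3), (78.2, 4), (146.4, 5)])
print(final)  # 5
```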
Mar 17 12:27:43.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:27:43.546: INFO: namespace: e2e-tests-container-probe-xszjm, resource: bindings, ignored listing per whitelist Mar 17 12:27:43.562: INFO: namespace e2e-tests-container-probe-xszjm deletion completed in 6.106185464s • [SLOW TEST:156.658 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:27:43.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-b37ed5b7-684a-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 17 12:27:43.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-k94nx" to be "success or failure" Mar 17 12:27:43.687: INFO: Pod "pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f": 
Phase="Pending", Reason="", readiness=false. Elapsed: 15.522753ms Mar 17 12:27:45.701: INFO: Pod "pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029204617s Mar 17 12:27:47.706: INFO: Pod "pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034140173s STEP: Saw pod success Mar 17 12:27:47.706: INFO: Pod "pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:27:47.709: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 17 12:27:47.725: INFO: Waiting for pod pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f to disappear Mar 17 12:27:47.730: INFO: Pod pod-projected-configmaps-b380a05c-684a-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:27:47.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k94nx" for this suite. 
Mar 17 12:27:53.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:27:53.863: INFO: namespace: e2e-tests-projected-k94nx, resource: bindings, ignored listing per whitelist Mar 17 12:27:53.875: INFO: namespace e2e-tests-projected-k94nx deletion completed in 6.140767722s • [SLOW TEST:10.313 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:27:53.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-b9aa2840-684a-11ea-b08f-0242ac11000f STEP: Creating a pod to test consume secrets Mar 17 12:27:54.013: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-msr2k" to be "success or failure" Mar 17 12:27:54.017: INFO: Pod 
"pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.960456ms Mar 17 12:27:56.024: INFO: Pod "pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010478176s Mar 17 12:27:58.028: INFO: Pod "pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014835449s STEP: Saw pod success Mar 17 12:27:58.028: INFO: Pod "pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:27:58.031: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 17 12:27:58.056: INFO: Waiting for pod pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f to disappear Mar 17 12:27:58.060: INFO: Pod pod-projected-secrets-b9aab500-684a-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:27:58.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-msr2k" for this suite. 
Mar 17 12:28:04.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:28:04.136: INFO: namespace: e2e-tests-projected-msr2k, resource: bindings, ignored listing per whitelist Mar 17 12:28:04.156: INFO: namespace e2e-tests-projected-msr2k deletion completed in 6.093331633s • [SLOW TEST:10.281 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:28:04.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 17 12:28:04.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 
--namespace=e2e-tests-kubectl-z9xbt' Mar 17 12:28:04.371: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 17 12:28:04.371: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Mar 17 12:28:08.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-z9xbt' Mar 17 12:28:08.506: INFO: stderr: "" Mar 17 12:28:08.507: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:28:08.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z9xbt" for this suite. 
Mar 17 12:28:30.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 12:28:30.584: INFO: namespace: e2e-tests-kubectl-z9xbt, resource: bindings, ignored listing per whitelist Mar 17 12:28:30.605: INFO: namespace e2e-tests-kubectl-z9xbt deletion completed in 22.094868626s • [SLOW TEST:26.449 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 12:28:30.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 12:28:30.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f" in 
namespace "e2e-tests-projected-stqwv" to be "success or failure" Mar 17 12:28:30.762: INFO: Pod "downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.973815ms Mar 17 12:28:32.766: INFO: Pod "downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020963633s Mar 17 12:28:34.770: INFO: Pod "downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02531418s STEP: Saw pod success Mar 17 12:28:34.771: INFO: Pod "downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f" satisfied condition "success or failure" Mar 17 12:28:34.774: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f container client-container: STEP: delete the pod Mar 17 12:28:34.793: INFO: Waiting for pod downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f to disappear Mar 17 12:28:34.797: INFO: Pod downwardapi-volume-cf8fc7dd-684a-11ea-b08f-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 12:28:34.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-stqwv" for this suite. 
Mar 17 12:28:40.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:28:40.911: INFO: namespace: e2e-tests-projected-stqwv, resource: bindings, ignored listing per whitelist
Mar 17 12:28:40.936: INFO: namespace e2e-tests-projected-stqwv deletion completed in 6.135895008s
• [SLOW TEST:10.331 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:28:40.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 17 12:28:41.055: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330124,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 12:28:41.056: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330124,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 17 12:28:51.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330143,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 17 12:28:51.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330143,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 17 12:29:01.071: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330163,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 17 12:29:01.071: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330163,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 17 12:29:11.077: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330183,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 17 12:29:11.077: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-a,UID:d5b50a71-684a-11ea-99e8-0242ac110002,ResourceVersion:330183,Generation:0,CreationTimestamp:2020-03-17 12:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 17 12:29:21.084: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-b,UID:ed910900-684a-11ea-99e8-0242ac110002,ResourceVersion:330203,Generation:0,CreationTimestamp:2020-03-17 12:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 12:29:21.084: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-b,UID:ed910900-684a-11ea-99e8-0242ac110002,ResourceVersion:330203,Generation:0,CreationTimestamp:2020-03-17 12:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 17 12:29:31.106: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-b,UID:ed910900-684a-11ea-99e8-0242ac110002,ResourceVersion:330223,Generation:0,CreationTimestamp:2020-03-17 12:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 12:29:31.106: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9vrm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-9vrm6/configmaps/e2e-watch-test-configmap-b,UID:ed910900-684a-11ea-99e8-0242ac110002,ResourceVersion:330223,Generation:0,CreationTimestamp:2020-03-17 12:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:29:41.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9vrm6" for this suite.
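[Editor's note] The watch events above are driven by nothing more than labeled ConfigMaps. As a reconstruction from the logged object dumps (the name, namespace, label, and data come from the log; the YAML manifest form itself is an assumption, since the test creates the objects through the Go client), configmap A after its first mutation would look roughly like:

```yaml
# Sketch of e2e-watch-test-configmap-a after the first modification,
# reconstructed from the logged object dump (not taken from the test source).
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: e2e-tests-watch-9vrm6
  labels:
    # selector that the "label A" and "label A or B" watchers match
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
```

A label-filtered watch such as `kubectl get configmap --watch -l watch-this-configmap=multiple-watchers-A` would observe the same ADDED/MODIFIED/DELETED sequence that the test asserts on.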
Mar 17 12:29:47.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:29:47.195: INFO: namespace: e2e-tests-watch-9vrm6, resource: bindings, ignored listing per whitelist
Mar 17 12:29:47.206: INFO: namespace e2e-tests-watch-9vrm6 deletion completed in 6.094822352s
• [SLOW TEST:66.269 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:29:47.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Mar 17 12:29:47.316: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-lgd8j" to be "success or failure"
Mar 17 12:29:47.318: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568044ms
Mar 17 12:29:49.343: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027410265s
Mar 17 12:29:51.347: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031575337s
STEP: Saw pod success
Mar 17 12:29:51.347: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar 17 12:29:51.351: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 17 12:29:51.390: INFO: Waiting for pod pod-host-path-test to disappear
Mar 17 12:29:51.402: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:29:51.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-lgd8j" for this suite.
Mar 17 12:29:57.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:29:57.488: INFO: namespace: e2e-tests-hostpath-lgd8j, resource: bindings, ignored listing per whitelist
Mar 17 12:29:57.512: INFO: namespace e2e-tests-hostpath-lgd8j deletion completed in 6.106053021s
• [SLOW TEST:10.306 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:29:57.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:29:57.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-ccqc4" to be "success or failure"
Mar 17 12:29:57.617: INFO: Pod "downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.794796ms
Mar 17 12:29:59.622: INFO: Pod "downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022993653s
Mar 17 12:30:01.626: INFO: Pod "downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027616458s
STEP: Saw pod success
Mar 17 12:30:01.626: INFO: Pod "downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 12:30:01.629: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f container client-container:
STEP: delete the pod
Mar 17 12:30:01.667: INFO: Waiting for pod downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f to disappear
Mar 17 12:30:01.679: INFO: Pod downwardapi-volume-0354957a-684b-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:30:01.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ccqc4" for this suite.
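[Editor's note] The "podname only" test above consumes the downward API through a projected volume: the pod's own name is written to a file inside the container. A minimal sketch of that pattern (the pod name, image, and mount path are illustrative assumptions, not taken from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # assumed image; the e2e test uses its own helper image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # file content becomes the pod's own name
```

The test pattern is the same: run a short-lived pod, let it print the projected file, and compare the container log against the expected value.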
Mar 17 12:30:07.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:30:07.750: INFO: namespace: e2e-tests-projected-ccqc4, resource: bindings, ignored listing per whitelist
Mar 17 12:30:07.786: INFO: namespace e2e-tests-projected-ccqc4 deletion completed in 6.103889138s
• [SLOW TEST:10.274 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:30:07.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 17 12:30:12.450: INFO: Successfully updated pod "labelsupdate0979b7de-684b-11ea-b08f-0242ac11000f"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:30:14.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h9wq9" for this suite.
Mar 17 12:30:36.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:30:36.533: INFO: namespace: e2e-tests-downward-api-h9wq9, resource: bindings, ignored listing per whitelist
Mar 17 12:30:36.574: INFO: namespace e2e-tests-downward-api-h9wq9 deletion completed in 22.09397015s
• [SLOW TEST:28.787 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:30:36.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:30:36.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f" in namespace "e2e-tests-downward-api-zbmlc" to be "success or failure"
Mar 17 12:30:36.686: INFO: Pod "downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013646ms
Mar 17 12:30:38.690: INFO: Pod "downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007428251s
Mar 17 12:30:40.693: INFO: Pod "downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011108937s
STEP: Saw pod success
Mar 17 12:30:40.693: INFO: Pod "downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 12:30:40.696: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f container client-container:
STEP: delete the pod
Mar 17 12:30:40.807: INFO: Waiting for pod downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f to disappear
Mar 17 12:30:40.824: INFO: Pod downwardapi-volume-1a9e7918-684b-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:30:40.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zbmlc" for this suite.
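[Editor's note] The "default cpu limit" case above relies on a downward API `resourceFieldRef`: when the container declares no CPU limit, the value projected into the file falls back to the node's allocatable CPU, which is what the test verifies. A sketch of the relevant volume stanza (the file path and container name are illustrative assumptions):

```yaml
# Fragment of a pod spec; assumes a container named client-container
# that sets no resources.limits.cpu of its own.
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: cpu_limit
      resourceFieldRef:
        containerName: client-container
        resource: limits.cpu   # resolves to node allocatable CPU when no limit is set
```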
Mar 17 12:30:46.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:30:46.900: INFO: namespace: e2e-tests-downward-api-zbmlc, resource: bindings, ignored listing per whitelist
Mar 17 12:30:46.919: INFO: namespace e2e-tests-downward-api-zbmlc deletion completed in 6.091444574s
• [SLOW TEST:10.345 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:30:46.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-20cae858-684b-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 17 12:30:47.031: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f" in namespace "e2e-tests-projected-7jpwc" to be "success or failure"
Mar 17 12:30:47.033: INFO: Pod "pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149085ms
Mar 17 12:30:49.039: INFO: Pod "pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007359949s
Mar 17 12:30:51.042: INFO: Pod "pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010700703s
STEP: Saw pod success
Mar 17 12:30:51.042: INFO: Pod "pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 12:30:51.044: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f container projected-secret-volume-test:
STEP: delete the pod
Mar 17 12:30:51.064: INFO: Waiting for pod pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f to disappear
Mar 17 12:30:51.068: INFO: Pod pod-projected-secrets-20cb65db-684b-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:30:51.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7jpwc" for this suite.
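[Editor's note] The "defaultMode set" case above exercises the `defaultMode` field of a projected volume, which controls the file permission bits applied to the projected secret keys. A sketch of the volume stanza (the secret name and mode value are illustrative; the test generates a unique name and checks the mode it chose):

```yaml
# Fragment of a pod spec; my-projected-secret is a hypothetical name.
volumes:
- name: secret-volume
  projected:
    defaultMode: 0400        # permission bits applied to each projected file
    sources:
    - secret:
        name: my-projected-secret
```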
Mar 17 12:30:57.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:30:57.117: INFO: namespace: e2e-tests-projected-7jpwc, resource: bindings, ignored listing per whitelist
Mar 17 12:30:57.174: INFO: namespace e2e-tests-projected-7jpwc deletion completed in 6.101720574s
• [SLOW TEST:10.255 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:30:57.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-26ebab38-684b-11ea-b08f-0242ac11000f
STEP: Creating configMap with name cm-test-opt-upd-26ebab80-684b-11ea-b08f-0242ac11000f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-26ebab38-684b-11ea-b08f-0242ac11000f
STEP: Updating configmap cm-test-opt-upd-26ebab80-684b-11ea-b08f-0242ac11000f
STEP: Creating configMap with name cm-test-opt-create-26ebab9f-684b-11ea-b08f-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:32:35.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-99tnq" for this suite.
Mar 17 12:32:57.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:32:57.896: INFO: namespace: e2e-tests-configmap-99tnq, resource: bindings, ignored listing per whitelist
Mar 17 12:32:57.927: INFO: namespace e2e-tests-configmap-99tnq deletion completed in 22.100856554s
• [SLOW TEST:120.752 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:32:57.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-68bm9
Mar 17 12:33:02.126: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-68bm9
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 12:33:02.129: INFO: Initial restart count of pod liveness-http is 0
Mar 17 12:33:22.171: INFO: Restart count of pod e2e-tests-container-probe-68bm9/liveness-http is now 1 (20.042255525s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:33:22.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-68bm9" for this suite.
Mar 17 12:33:28.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:33:28.309: INFO: namespace: e2e-tests-container-probe-68bm9, resource: bindings, ignored listing per whitelist
Mar 17 12:33:28.338: INFO: namespace e2e-tests-container-probe-68bm9 deletion completed in 6.119507132s
• [SLOW TEST:30.411 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:33:28.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
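[Editor's note] The restart observed in the liveness test above (restart count going from 0 to 1 after roughly 20 seconds) is produced by an `httpGet` probe against `/healthz` served by a container that deliberately starts failing its health endpoint. A hedged sketch of such a probe stanza (the image, port, and timing values are assumptions, not read from the test source):

```yaml
# Fragment of a pod spec illustrating an HTTP liveness probe.
containers:
- name: liveness-http
  image: k8s.gcr.io/liveness        # assumed image that serves /healthz, then fails it
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080                    # assumed port
    initialDelaySeconds: 15
    periodSeconds: 3                # repeated failures trigger the container restart
```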
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8101537f-684b-11ea-b08f-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 17 12:33:28.458: INFO: Waiting up to 5m0s for pod "pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f" in namespace "e2e-tests-secrets-kl974" to be "success or failure"
Mar 17 12:33:28.463: INFO: Pod "pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.558535ms
Mar 17 12:33:30.467: INFO: Pod "pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008499109s
Mar 17 12:33:32.471: INFO: Pod "pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012832758s
STEP: Saw pod success
Mar 17 12:33:32.471: INFO: Pod "pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f" satisfied condition "success or failure"
Mar 17 12:33:32.474: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 17 12:33:32.494: INFO: Waiting for pod pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f to disappear
Mar 17 12:33:32.498: INFO: Pod pod-secrets-8102f163-684b-11ea-b08f-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:33:32.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kl974" for this suite.
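[Editor's note] "Consumable in multiple volumes" above means the same Secret is mounted at two different paths in one pod. A minimal sketch of that shape (the image, secret name, and mount paths are illustrative assumptions):

```yaml
# Fragment of a pod spec; the same Secret backs both volumes.
spec:
  containers:
  - name: secret-volume-test
    image: busybox                  # assumed image
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret         # hypothetical name; the test generates a unique one
  - name: secret-volume-2
    secret:
      secretName: my-secret
```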
Mar 17 12:33:38.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:33:38.540: INFO: namespace: e2e-tests-secrets-kl974, resource: bindings, ignored listing per whitelist
Mar 17 12:33:38.599: INFO: namespace e2e-tests-secrets-kl974 deletion completed in 6.098168239s
• [SLOW TEST:10.261 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
Mar 17 12:33:38.600: INFO: Running AfterSuite actions on all nodes
Mar 17 12:33:38.600: INFO: Running AfterSuite actions on node 1
Mar 17 12:33:38.600: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6414.887 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS