I0508 21:08:32.084944 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0508 21:08:32.085266 6 e2e.go:109] Starting e2e run "19893d35-9655-4ea1-b4ec-d41cbe464f30" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588972111 - Will randomize all specs
Will run 278 of 4842 specs

May 8 21:08:32.146: INFO: >>> kubeConfig: /root/.kube/config
May 8 21:08:32.151: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 8 21:08:32.172: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 8 21:08:32.197: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 8 21:08:32.197: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 8 21:08:32.197: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 8 21:08:32.206: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 8 21:08:32.206: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 8 21:08:32.206: INFO: e2e test version: v1.17.4
May 8 21:08:32.207: INFO: kube-apiserver version: v1.17.2
May 8 21:08:32.207: INFO: >>> kubeConfig: /root/.kube/config
May 8 21:08:32.210: INFO: Cluster IP family: ipv4
SS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:08:32.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
May 8 21:08:32.255: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
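The "Creating a kubernetes client" step that opens every spec below reduces to loading the kubeconfig named in the log and building a typed clientset. A minimal sketch of that setup with client-go; the error handling and structure here are illustrative, not the framework's actual code:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the suite reports above (>>> kubeConfig: /root/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	// Build the typed clientset each spec uses to talk to the API server.
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same probe that produces the "kube-apiserver version: v1.17.2" line above.
	version, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver version:", version.GitVersion)
}
```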
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 8 21:08:32.284: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-703 /api/v1/namespaces/watch-703/configmaps/e2e-watch-test-resource-version 776a85b3-20fb-47ab-a2d7-6a9427211a51 14527229 0 2020-05-08 21:08:32 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 8 21:08:32.285: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-703 /api/v1/namespaces/watch-703/configmaps/e2e-watch-test-resource-version 776a85b3-20fb-47ab-a2d7-6a9427211a51 14527230 0 2020-05-08 21:08:32 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:08:32.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-703" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":1,"skipped":2,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:08:32.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 8 21:08:32.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3427
I0508 21:08:32.368892 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3427, replica count: 1
I0508 21:08:33.419301 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0508 21:08:34.419507 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0508 21:08:35.419745 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0508 21:08:36.419965 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 8 21:08:36.561:
INFO: Created: latency-svc-8fz2f May 8 21:08:36.622: INFO: Got endpoints: latency-svc-8fz2f [102.249537ms] May 8 21:08:36.651: INFO: Created: latency-svc-xbq6s May 8 21:08:36.661: INFO: Got endpoints: latency-svc-xbq6s [38.506619ms] May 8 21:08:36.682: INFO: Created: latency-svc-jjflf May 8 21:08:36.691: INFO: Got endpoints: latency-svc-jjflf [68.226702ms] May 8 21:08:36.711: INFO: Created: latency-svc-llh5k May 8 21:08:36.758: INFO: Got endpoints: latency-svc-llh5k [136.299431ms] May 8 21:08:36.766: INFO: Created: latency-svc-zdzgc May 8 21:08:36.782: INFO: Got endpoints: latency-svc-zdzgc [160.16098ms] May 8 21:08:36.837: INFO: Created: latency-svc-fzhs8 May 8 21:08:36.920: INFO: Got endpoints: latency-svc-fzhs8 [297.081953ms] May 8 21:08:36.939: INFO: Created: latency-svc-p9psz May 8 21:08:36.969: INFO: Got endpoints: latency-svc-p9psz [345.940955ms] May 8 21:08:36.998: INFO: Created: latency-svc-zvrmd May 8 21:08:37.094: INFO: Got endpoints: latency-svc-zvrmd [471.057141ms] May 8 21:08:37.096: INFO: Created: latency-svc-vzrsq May 8 21:08:37.113: INFO: Got endpoints: latency-svc-vzrsq [490.584895ms] May 8 21:08:37.143: INFO: Created: latency-svc-zmhl7 May 8 21:08:37.179: INFO: Got endpoints: latency-svc-zmhl7 [556.525385ms] May 8 21:08:37.250: INFO: Created: latency-svc-8wrs8 May 8 21:08:37.263: INFO: Got endpoints: latency-svc-8wrs8 [638.350782ms] May 8 21:08:37.288: INFO: Created: latency-svc-xxx48 May 8 21:08:37.311: INFO: Got endpoints: latency-svc-xxx48 [686.181653ms] May 8 21:08:37.348: INFO: Created: latency-svc-dqlvz May 8 21:08:37.405: INFO: Got endpoints: latency-svc-dqlvz [781.922274ms] May 8 21:08:37.431: INFO: Created: latency-svc-zvk2f May 8 21:08:37.454: INFO: Got endpoints: latency-svc-zvk2f [830.753345ms] May 8 21:08:37.491: INFO: Created: latency-svc-j7kxr May 8 21:08:37.560: INFO: Got endpoints: latency-svc-j7kxr [937.967134ms] May 8 21:08:37.570: INFO: Created: latency-svc-2ddfx May 8 21:08:37.593: INFO: Got endpoints: latency-svc-2ddfx [967.225906ms] May 8 21:08:37.616: INFO: Created: latency-svc-gkxn4 May 8 21:08:37.640: INFO: Got endpoints: latency-svc-gkxn4 [979.854132ms] May 8 21:08:37.692: INFO: Created: latency-svc-86rrv May 8 21:08:37.707: INFO: Got endpoints: latency-svc-86rrv [1.016276695s] May 8 21:08:37.736: INFO: Created: latency-svc-wtfkc May 8 21:08:37.755: INFO: Got endpoints: latency-svc-wtfkc [996.468688ms] May 8 21:08:37.779: INFO: Created: latency-svc-nwnkw May 8 21:08:37.837: INFO: Got endpoints: latency-svc-nwnkw [1.054274652s] May 8 21:08:38.002: INFO: Created: latency-svc-8mjv5 May 8 21:08:38.063: INFO: Got endpoints: latency-svc-8mjv5 [1.143356167s] May 8 21:08:38.097: INFO: Created: latency-svc-pfklk May 8 21:08:38.141: INFO: Got endpoints: latency-svc-pfklk [1.17257232s] May 8 21:08:38.175: INFO: Created: latency-svc-c89l5 May 8 21:08:38.190: INFO: Got endpoints: latency-svc-c89l5 [1.09663089s] May 8 21:08:38.211: INFO: Created: latency-svc-mxvdm May 8 21:08:38.240: INFO: Got endpoints: latency-svc-mxvdm [1.127018431s] May 8 21:08:38.292: INFO: Created: latency-svc-xk4xr May 8 21:08:38.311: INFO: Got endpoints: latency-svc-xk4xr [1.131954742s] May 8 21:08:38.331: INFO: Created: latency-svc-kdpbc May 8 21:08:38.341: INFO: Got endpoints: latency-svc-kdpbc [1.078357481s] May 8 21:08:38.439: INFO: Created: latency-svc-cmz25 May 8 21:08:38.451: INFO: Got endpoints: latency-svc-cmz25 [1.139331964s] May 8 21:08:38.481: INFO: Created: latency-svc-h6pqb May 8 21:08:38.504: INFO: Got endpoints: latency-svc-h6pqb [1.098924965s] May 8 21:08:38.561: 
INFO: Created: latency-svc-5z576 May 8 21:08:38.570: INFO: Got endpoints: latency-svc-5z576 [1.11521327s] May 8 21:08:38.607: INFO: Created: latency-svc-bssdq May 8 21:08:38.637: INFO: Got endpoints: latency-svc-bssdq [1.076428009s] May 8 21:08:38.716: INFO: Created: latency-svc-zrr4m May 8 21:08:38.720: INFO: Got endpoints: latency-svc-zrr4m [1.126685174s] May 8 21:08:38.750: INFO: Created: latency-svc-6v5b9 May 8 21:08:38.763: INFO: Got endpoints: latency-svc-6v5b9 [1.122166905s] May 8 21:08:38.780: INFO: Created: latency-svc-hsdgd May 8 21:08:38.793: INFO: Got endpoints: latency-svc-hsdgd [1.085970762s] May 8 21:08:38.878: INFO: Created: latency-svc-7xxb2 May 8 21:08:38.881: INFO: Got endpoints: latency-svc-7xxb2 [1.126267555s] May 8 21:08:38.924: INFO: Created: latency-svc-lw74j May 8 21:08:38.956: INFO: Got endpoints: latency-svc-lw74j [1.119423415s] May 8 21:08:39.035: INFO: Created: latency-svc-799fn May 8 21:08:39.081: INFO: Got endpoints: latency-svc-799fn [1.017774547s] May 8 21:08:39.083: INFO: Created: latency-svc-bfm67 May 8 21:08:39.105: INFO: Got endpoints: latency-svc-bfm67 [963.099011ms] May 8 21:08:39.177: INFO: Created: latency-svc-n7l6c May 8 21:08:39.209: INFO: Got endpoints: latency-svc-n7l6c [1.018626561s] May 8 21:08:39.327: INFO: Created: latency-svc-p9gjq May 8 21:08:39.350: INFO: Created: latency-svc-jplth May 8 21:08:39.351: INFO: Got endpoints: latency-svc-p9gjq [1.110373082s] May 8 21:08:39.364: INFO: Got endpoints: latency-svc-jplth [1.053269165s] May 8 21:08:39.381: INFO: Created: latency-svc-p75km May 8 21:08:39.395: INFO: Got endpoints: latency-svc-p75km [1.054087635s] May 8 21:08:39.416: INFO: Created: latency-svc-hnv28 May 8 21:08:39.489: INFO: Got endpoints: latency-svc-hnv28 [1.038538427s] May 8 21:08:39.491: INFO: Created: latency-svc-xdrn8 May 8 21:08:39.497: INFO: Got endpoints: latency-svc-xdrn8 [993.712139ms] May 8 21:08:39.519: INFO: Created: latency-svc-tlt4n May 8 21:08:39.528: INFO: Got endpoints: latency-svc-tlt4n [958.073619ms] May 8 21:08:39.554: INFO: Created: latency-svc-js6qg May 8 21:08:39.564: INFO: Got endpoints: latency-svc-js6qg [927.284357ms] May 8 21:08:39.584: INFO: Created: latency-svc-67t5l May 8 21:08:39.632: INFO: Got endpoints: latency-svc-67t5l [911.81137ms] May 8 21:08:39.634: INFO: Created: latency-svc-jp7tv May 8 21:08:39.643: INFO: Got endpoints: latency-svc-jp7tv [880.138508ms] May 8 21:08:39.662: INFO: Created: latency-svc-c9l62 May 8 21:08:39.673: INFO: Got endpoints: latency-svc-c9l62 [880.155843ms] May 8 21:08:39.692: INFO: Created: latency-svc-tfdc6 May 8 21:08:39.704: INFO: Got endpoints: latency-svc-tfdc6 [822.516227ms] May 8 21:08:39.722: INFO: Created: latency-svc-qn6sq May 8 21:08:39.770: INFO: Got endpoints: latency-svc-qn6sq [813.45267ms] May 8 21:08:39.797: INFO: Created: latency-svc-62b98 May 8 21:08:39.806: INFO: Got endpoints: latency-svc-62b98 [725.067055ms] May 8 21:08:39.830: INFO: Created: latency-svc-l5492 May 8 21:08:39.843: INFO: Got endpoints: latency-svc-l5492 [738.06013ms] May 8 21:08:39.926: INFO: Created: latency-svc-mmvss May 8 21:08:39.928: INFO: Got endpoints: latency-svc-mmvss [719.475762ms] May 8 21:08:39.967: INFO: Created: latency-svc-qk6vv May 8 21:08:39.981: INFO: Got endpoints: latency-svc-qk6vv [630.267727ms] May 8 21:08:39.997: INFO: Created: latency-svc-qrv7h May 8 21:08:40.011: INFO: Got endpoints: latency-svc-qrv7h [646.486282ms] May 8 21:08:40.064: INFO: Created: latency-svc-mrl9m May 8 21:08:40.067: INFO: Got endpoints: latency-svc-mrl9m [671.307953ms] May 8 21:08:40.094: 
INFO: Created: latency-svc-pjn4q May 8 21:08:40.108: INFO: Got endpoints: latency-svc-pjn4q [618.789829ms] May 8 21:08:40.130: INFO: Created: latency-svc-w2tbp May 8 21:08:40.146: INFO: Got endpoints: latency-svc-w2tbp [648.305143ms] May 8 21:08:40.159: INFO: Created: latency-svc-vxcc4 May 8 21:08:40.204: INFO: Got endpoints: latency-svc-vxcc4 [676.504411ms] May 8 21:08:40.214: INFO: Created: latency-svc-47ckx May 8 21:08:40.238: INFO: Got endpoints: latency-svc-47ckx [673.25588ms] May 8 21:08:40.263: INFO: Created: latency-svc-kdr67 May 8 21:08:40.328: INFO: Got endpoints: latency-svc-kdr67 [695.939596ms] May 8 21:08:40.364: INFO: Created: latency-svc-qvb2q May 8 21:08:40.378: INFO: Got endpoints: latency-svc-qvb2q [734.933684ms] May 8 21:08:40.400: INFO: Created: latency-svc-cczcb May 8 21:08:40.408: INFO: Got endpoints: latency-svc-cczcb [734.819011ms] May 8 21:08:40.495: INFO: Created: latency-svc-m55x9 May 8 21:08:40.520: INFO: Created: latency-svc-td6sv May 8 21:08:40.520: INFO: Got endpoints: latency-svc-m55x9 [816.032881ms] May 8 21:08:40.535: INFO: Got endpoints: latency-svc-td6sv [764.680987ms] May 8 21:08:40.556: INFO: Created: latency-svc-lmhp2 May 8 21:08:40.577: INFO: Got endpoints: latency-svc-lmhp2 [770.970176ms] May 8 21:08:40.663: INFO: Created: latency-svc-6c9q7 May 8 21:08:40.687: INFO: Got endpoints: latency-svc-6c9q7 [844.622187ms] May 8 21:08:40.688: INFO: Created: latency-svc-58mk2 May 8 21:08:40.698: INFO: Got endpoints: latency-svc-58mk2 [769.054792ms] May 8 21:08:40.718: INFO: Created: latency-svc-n4jvc May 8 21:08:40.728: INFO: Got endpoints: latency-svc-n4jvc [747.279143ms] May 8 21:08:40.748: INFO: Created: latency-svc-ckrd7 May 8 21:08:40.759: INFO: Got endpoints: latency-svc-ckrd7 [747.464847ms] May 8 21:08:40.812: INFO: Created: latency-svc-vflxd May 8 21:08:40.819: INFO: Got endpoints: latency-svc-vflxd [752.127575ms] May 8 21:08:40.850: INFO: Created: latency-svc-786kg May 8 21:08:40.861: INFO: Got endpoints: latency-svc-786kg [752.894739ms] May 8 21:08:40.880: INFO: Created: latency-svc-7l56q May 8 21:08:40.891: INFO: Got endpoints: latency-svc-7l56q [745.040129ms] May 8 21:08:40.980: INFO: Created: latency-svc-frdrm May 8 21:08:40.994: INFO: Got endpoints: latency-svc-frdrm [789.366715ms] May 8 21:08:41.036: INFO: Created: latency-svc-f72tq May 8 21:08:41.048: INFO: Got endpoints: latency-svc-f72tq [810.56949ms] May 8 21:08:41.136: INFO: Created: latency-svc-mqkb8 May 8 21:08:41.139: INFO: Got endpoints: latency-svc-mqkb8 [810.920346ms] May 8 21:08:41.179: INFO: Created: latency-svc-fd98v May 8 21:08:41.192: INFO: Got endpoints: latency-svc-fd98v [814.575325ms] May 8 21:08:41.209: INFO: Created: latency-svc-nglf6 May 8 21:08:41.222: INFO: Got endpoints: latency-svc-nglf6 [814.059137ms] May 8 21:08:41.273: INFO: Created: latency-svc-kvpkj May 8 21:08:41.276: INFO: Got endpoints: latency-svc-kvpkj [756.077162ms] May 8 21:08:41.300: INFO: Created: latency-svc-mrm8g May 8 21:08:41.313: INFO: Got endpoints: latency-svc-mrm8g [778.564147ms] May 8 21:08:41.330: INFO: Created: latency-svc-vjcqs May 8 21:08:41.343: INFO: Got endpoints: latency-svc-vjcqs [766.014176ms] May 8 21:08:41.365: INFO: Created: latency-svc-8bxmk May 8 21:08:41.405: INFO: Got endpoints: latency-svc-8bxmk [717.277191ms] May 8 21:08:41.425: INFO: Created: latency-svc-rmtjs May 8 21:08:41.441: INFO: Got endpoints: latency-svc-rmtjs [743.002609ms] May 8 21:08:41.455: INFO: Created: latency-svc-t52w7 May 8 21:08:41.471: INFO: Got endpoints: latency-svc-t52w7 [742.262248ms] May 8 21:08:41.504: 
INFO: Created: latency-svc-jwcft May 8 21:08:41.560: INFO: Got endpoints: latency-svc-jwcft [801.572219ms] May 8 21:08:41.576: INFO: Created: latency-svc-f4gt9 May 8 21:08:41.593: INFO: Got endpoints: latency-svc-f4gt9 [773.793074ms] May 8 21:08:41.625: INFO: Created: latency-svc-9t2v4 May 8 21:08:41.647: INFO: Got endpoints: latency-svc-9t2v4 [786.150873ms] May 8 21:08:41.705: INFO: Created: latency-svc-pqf77 May 8 21:08:41.714: INFO: Got endpoints: latency-svc-pqf77 [822.775561ms] May 8 21:08:41.744: INFO: Created: latency-svc-cg56l May 8 21:08:41.762: INFO: Got endpoints: latency-svc-cg56l [767.874717ms] May 8 21:08:41.785: INFO: Created: latency-svc-4q72h May 8 21:08:41.796: INFO: Got endpoints: latency-svc-4q72h [747.580915ms] May 8 21:08:41.856: INFO: Created: latency-svc-7gxnr May 8 21:08:41.894: INFO: Got endpoints: latency-svc-7gxnr [754.616444ms] May 8 21:08:41.894: INFO: Created: latency-svc-h42s6 May 8 21:08:41.917: INFO: Got endpoints: latency-svc-h42s6 [724.075998ms] May 8 21:08:41.955: INFO: Created: latency-svc-59knl May 8 21:08:42.010: INFO: Got endpoints: latency-svc-59knl [787.64125ms] May 8 21:08:42.015: INFO: Created: latency-svc-85zbs May 8 21:08:42.037: INFO: Got endpoints: latency-svc-85zbs [761.212202ms] May 8 21:08:42.067: INFO: Created: latency-svc-wft74 May 8 21:08:42.098: INFO: Got endpoints: latency-svc-wft74 [784.701443ms] May 8 21:08:42.178: INFO: Created: latency-svc-ck2r8 May 8 21:08:42.181: INFO: Got endpoints: latency-svc-ck2r8 [837.011081ms] May 8 21:08:42.206: INFO: Created: latency-svc-5j9t2 May 8 21:08:42.242: INFO: Got endpoints: latency-svc-5j9t2 [837.549767ms] May 8 21:08:42.259: INFO: Created: latency-svc-zxb9l May 8 21:08:42.272: INFO: Got endpoints: latency-svc-zxb9l [831.179463ms] May 8 21:08:42.321: INFO: Created: latency-svc-hfwsm May 8 21:08:42.350: INFO: Got endpoints: latency-svc-hfwsm [878.960365ms] May 8 21:08:42.350: INFO: Created: latency-svc-ckz7n May 8 21:08:42.362: INFO: Got endpoints: latency-svc-ckz7n [802.264891ms] May 8 21:08:42.385: INFO: Created: latency-svc-vwzjp May 8 21:08:42.399: INFO: Got endpoints: latency-svc-vwzjp [806.158978ms] May 8 21:08:42.478: INFO: Created: latency-svc-lgwst May 8 21:08:42.481: INFO: Got endpoints: latency-svc-lgwst [834.003521ms] May 8 21:08:42.524: INFO: Created: latency-svc-nc7g5 May 8 21:08:42.570: INFO: Got endpoints: latency-svc-nc7g5 [856.698058ms] May 8 21:08:42.626: INFO: Created: latency-svc-vn5mj May 8 21:08:42.643: INFO: Got endpoints: latency-svc-vn5mj [880.781621ms] May 8 21:08:42.673: INFO: Created: latency-svc-lfs76 May 8 21:08:42.689: INFO: Got endpoints: latency-svc-lfs76 [892.974088ms] May 8 21:08:42.704: INFO: Created: latency-svc-mxsnw May 8 21:08:42.724: INFO: Got endpoints: latency-svc-mxsnw [830.693743ms] May 8 21:08:42.776: INFO: Created: latency-svc-plnt9 May 8 21:08:42.784: INFO: Got endpoints: latency-svc-plnt9 [867.627207ms] May 8 21:08:42.805: INFO: Created: latency-svc-8gk4t May 8 21:08:42.815: INFO: Got endpoints: latency-svc-8gk4t [805.045715ms] May 8 21:08:42.847: INFO: Created: latency-svc-nnsp8 May 8 21:08:42.870: INFO: Got endpoints: latency-svc-nnsp8 [832.793336ms] May 8 21:08:42.920: INFO: Created: latency-svc-z2m8w May 8 21:08:42.924: INFO: Got endpoints: latency-svc-z2m8w [825.537071ms] May 8 21:08:42.950: INFO: Created: latency-svc-7r6kn May 8 21:08:42.978: INFO: Got endpoints: latency-svc-7r6kn [797.256132ms] May 8 21:08:42.998: INFO: Created: latency-svc-7mqx5 May 8 21:08:43.105: INFO: Got endpoints: latency-svc-7mqx5 [862.914628ms] May 8 21:08:43.108: 
INFO: Created: latency-svc-cxwvf May 8 21:08:43.116: INFO: Got endpoints: latency-svc-cxwvf [843.909735ms] May 8 21:08:43.171: INFO: Created: latency-svc-qvg4x May 8 21:08:43.183: INFO: Got endpoints: latency-svc-qvg4x [833.05227ms] May 8 21:08:43.262: INFO: Created: latency-svc-pw85b May 8 21:08:43.265: INFO: Got endpoints: latency-svc-pw85b [902.551055ms] May 8 21:08:43.299: INFO: Created: latency-svc-p29zn May 8 21:08:43.310: INFO: Got endpoints: latency-svc-p29zn [910.877226ms] May 8 21:08:43.344: INFO: Created: latency-svc-kwj48 May 8 21:08:43.423: INFO: Got endpoints: latency-svc-kwj48 [941.289932ms] May 8 21:08:43.426: INFO: Created: latency-svc-z9dqp May 8 21:08:43.453: INFO: Got endpoints: latency-svc-z9dqp [882.500583ms] May 8 21:08:43.495: INFO: Created: latency-svc-2vtl8 May 8 21:08:43.520: INFO: Got endpoints: latency-svc-2vtl8 [877.148139ms] May 8 21:08:43.579: INFO: Created: latency-svc-vkxcw May 8 21:08:43.587: INFO: Got endpoints: latency-svc-vkxcw [897.94657ms] May 8 21:08:43.608: INFO: Created: latency-svc-zb2zn May 8 21:08:43.622: INFO: Got endpoints: latency-svc-zb2zn [897.864632ms] May 8 21:08:43.639: INFO: Created: latency-svc-ckktc May 8 21:08:43.652: INFO: Got endpoints: latency-svc-ckktc [868.10765ms] May 8 21:08:43.668: INFO: Created: latency-svc-jqv85 May 8 21:08:43.710: INFO: Got endpoints: latency-svc-jqv85 [895.057776ms] May 8 21:08:43.719: INFO: Created: latency-svc-f9wtd May 8 21:08:43.731: INFO: Got endpoints: latency-svc-f9wtd [860.948989ms] May 8 21:08:43.753: INFO: Created: latency-svc-vxkhz May 8 21:08:43.761: INFO: Got endpoints: latency-svc-vxkhz [837.769605ms] May 8 21:08:43.777: INFO: Created: latency-svc-cqnpl May 8 21:08:43.786: INFO: Got endpoints: latency-svc-cqnpl [808.017317ms] May 8 21:08:43.800: INFO: Created: latency-svc-h9ks6 May 8 21:08:43.842: INFO: Got endpoints: latency-svc-h9ks6 [736.525963ms] May 8 21:08:43.848: INFO: Created: latency-svc-kf6ds May 8 21:08:43.858: INFO: Got endpoints: latency-svc-kf6ds [742.547622ms] May 8 21:08:43.890: INFO: Created: latency-svc-z8jpg May 8 21:08:43.907: INFO: Got endpoints: latency-svc-z8jpg [723.841395ms] May 8 21:08:44.017: INFO: Created: latency-svc-6tbnc May 8 21:08:44.020: INFO: Got endpoints: latency-svc-6tbnc [754.476048ms] May 8 21:08:44.064: INFO: Created: latency-svc-25vtj May 8 21:08:44.075: INFO: Got endpoints: latency-svc-25vtj [765.020874ms] May 8 21:08:44.094: INFO: Created: latency-svc-c7kh5 May 8 21:08:44.105: INFO: Got endpoints: latency-svc-c7kh5 [682.642337ms] May 8 21:08:44.166: INFO: Created: latency-svc-lgb2s May 8 21:08:44.178: INFO: Got endpoints: latency-svc-lgb2s [725.276526ms] May 8 21:08:44.196: INFO: Created: latency-svc-pt4t7 May 8 21:08:44.208: INFO: Got endpoints: latency-svc-pt4t7 [688.006128ms] May 8 21:08:44.228: INFO: Created: latency-svc-z4vw5 May 8 21:08:44.250: INFO: Got endpoints: latency-svc-z4vw5 [663.583172ms] May 8 21:08:44.309: INFO: Created: latency-svc-b84nd May 8 21:08:44.328: INFO: Got endpoints: latency-svc-b84nd [705.13868ms] May 8 21:08:44.352: INFO: Created: latency-svc-trgv7 May 8 21:08:44.388: INFO: Got endpoints: latency-svc-trgv7 [735.542918ms] May 8 21:08:44.449: INFO: Created: latency-svc-5pmk2 May 8 21:08:44.461: INFO: Got endpoints: latency-svc-5pmk2 [750.876801ms] May 8 21:08:44.480: INFO: Created: latency-svc-9z2n9 May 8 21:08:44.509: INFO: Got endpoints: latency-svc-9z2n9 [778.136654ms] May 8 21:08:44.621: INFO: Created: latency-svc-jtmkr May 8 21:08:44.624: INFO: Got endpoints: latency-svc-jtmkr [862.390085ms] May 8 21:08:44.652: 
INFO: Created: latency-svc-7b78j May 8 21:08:44.668: INFO: Got endpoints: latency-svc-7b78j [882.398754ms] May 8 21:08:44.683: INFO: Created: latency-svc-xnnn4 May 8 21:08:44.696: INFO: Got endpoints: latency-svc-xnnn4 [853.968301ms] May 8 21:08:44.712: INFO: Created: latency-svc-kbd5g May 8 21:08:44.752: INFO: Got endpoints: latency-svc-kbd5g [893.100128ms] May 8 21:08:44.755: INFO: Created: latency-svc-7lc7d May 8 21:08:44.772: INFO: Got endpoints: latency-svc-7lc7d [864.961288ms] May 8 21:08:44.802: INFO: Created: latency-svc-4cf4q May 8 21:08:44.812: INFO: Got endpoints: latency-svc-4cf4q [792.678293ms] May 8 21:08:44.832: INFO: Created: latency-svc-z7wl6 May 8 21:08:44.848: INFO: Got endpoints: latency-svc-z7wl6 [772.614334ms] May 8 21:08:44.898: INFO: Created: latency-svc-dfjwq May 8 21:08:44.901: INFO: Got endpoints: latency-svc-dfjwq [795.934561ms] May 8 21:08:44.935: INFO: Created: latency-svc-zmph8 May 8 21:08:44.950: INFO: Got endpoints: latency-svc-zmph8 [771.633352ms] May 8 21:08:44.972: INFO: Created: latency-svc-wqbtg May 8 21:08:44.980: INFO: Got endpoints: latency-svc-wqbtg [772.227771ms] May 8 21:08:44.994: INFO: Created: latency-svc-tj9xz May 8 21:08:45.045: INFO: Got endpoints: latency-svc-tj9xz [794.583606ms] May 8 21:08:45.078: INFO: Created: latency-svc-rknrr May 8 21:08:45.138: INFO: Got endpoints: latency-svc-rknrr [810.278845ms] May 8 21:08:45.211: INFO: Created: latency-svc-g74f7 May 8 21:08:45.221: INFO: Got endpoints: latency-svc-g74f7 [832.619003ms] May 8 21:08:45.260: INFO: Created: latency-svc-7vtxp May 8 21:08:45.291: INFO: Got endpoints: latency-svc-7vtxp [829.397435ms] May 8 21:08:45.312: INFO: Created: latency-svc-4ww2t May 8 21:08:45.323: INFO: Got endpoints: latency-svc-4ww2t [814.027439ms] May 8 21:08:45.342: INFO: Created: latency-svc-v5l8p May 8 21:08:45.354: INFO: Got endpoints: latency-svc-v5l8p [729.720024ms] May 8 21:08:45.372: INFO: Created: latency-svc-hm4l9 May 8 21:08:45.384: INFO: Got endpoints: latency-svc-hm4l9 [715.684185ms] May 8 21:08:45.423: INFO: Created: latency-svc-cfrbh May 8 21:08:45.425: INFO: Got endpoints: latency-svc-cfrbh [729.029093ms] May 8 21:08:45.463: INFO: Created: latency-svc-682k2 May 8 21:08:45.475: INFO: Got endpoints: latency-svc-682k2 [723.201028ms] May 8 21:08:45.510: INFO: Created: latency-svc-z54vl May 8 21:08:45.554: INFO: Got endpoints: latency-svc-z54vl [782.818557ms] May 8 21:08:45.588: INFO: Created: latency-svc-mgthv May 8 21:08:45.613: INFO: Got endpoints: latency-svc-mgthv [800.917434ms] May 8 21:08:45.710: INFO: Created: latency-svc-9mq9b May 8 21:08:45.732: INFO: Got endpoints: latency-svc-9mq9b [884.62008ms] May 8 21:08:45.732: INFO: Created: latency-svc-kt7sp May 8 21:08:45.757: INFO: Got endpoints: latency-svc-kt7sp [855.611389ms] May 8 21:08:45.780: INFO: Created: latency-svc-kc8xm May 8 21:08:45.794: INFO: Got endpoints: latency-svc-kc8xm [843.993ms] May 8 21:08:45.810: INFO: Created: latency-svc-jb7fd May 8 21:08:45.847: INFO: Got endpoints: latency-svc-jb7fd [867.242085ms] May 8 21:08:45.858: INFO: Created: latency-svc-cw4zv May 8 21:08:45.873: INFO: Got endpoints: latency-svc-cw4zv [827.809231ms] May 8 21:08:45.919: INFO: Created: latency-svc-z6vh5 May 8 21:08:45.974: INFO: Got endpoints: latency-svc-z6vh5 [835.942813ms] May 8 21:08:45.978: INFO: Created: latency-svc-972bv May 8 21:08:45.993: INFO: Got endpoints: latency-svc-972bv [772.266789ms] May 8 21:08:46.015: INFO: Created: latency-svc-vsg5z May 8 21:08:46.044: INFO: Got endpoints: latency-svc-vsg5z [752.909373ms] May 8 21:08:46.135: 
INFO: Created: latency-svc-zv467 May 8 21:08:46.158: INFO: Got endpoints: latency-svc-zv467 [835.124081ms] May 8 21:08:46.159: INFO: Created: latency-svc-rbs5m May 8 21:08:46.168: INFO: Got endpoints: latency-svc-rbs5m [814.13861ms] May 8 21:08:46.188: INFO: Created: latency-svc-xms7n May 8 21:08:46.198: INFO: Got endpoints: latency-svc-xms7n [813.882784ms] May 8 21:08:46.224: INFO: Created: latency-svc-dlwp9 May 8 21:08:46.285: INFO: Got endpoints: latency-svc-dlwp9 [859.783401ms] May 8 21:08:46.345: INFO: Created: latency-svc-vrzb7 May 8 21:08:46.361: INFO: Got endpoints: latency-svc-vrzb7 [885.658088ms] May 8 21:08:46.380: INFO: Created: latency-svc-65dvl May 8 21:08:46.447: INFO: Got endpoints: latency-svc-65dvl [892.578296ms] May 8 21:08:46.449: INFO: Created: latency-svc-kxrf9 May 8 21:08:46.463: INFO: Got endpoints: latency-svc-kxrf9 [849.739815ms] May 8 21:08:46.488: INFO: Created: latency-svc-fmzjk May 8 21:08:46.512: INFO: Got endpoints: latency-svc-fmzjk [779.241077ms] May 8 21:08:46.536: INFO: Created: latency-svc-rtp5x May 8 21:08:46.602: INFO: Got endpoints: latency-svc-rtp5x [845.440359ms] May 8 21:08:46.626: INFO: Created: latency-svc-xz5ng May 8 21:08:46.646: INFO: Got endpoints: latency-svc-xz5ng [851.55335ms] May 8 21:08:46.668: INFO: Created: latency-svc-tfbqq May 8 21:08:46.688: INFO: Got endpoints: latency-svc-tfbqq [840.979554ms] May 8 21:08:46.740: INFO: Created: latency-svc-bj2hr May 8 21:08:46.743: INFO: Got endpoints: latency-svc-bj2hr [870.447ms] May 8 21:08:46.783: INFO: Created: latency-svc-cn8h2 May 8 21:08:46.814: INFO: Got endpoints: latency-svc-cn8h2 [839.792554ms] May 8 21:08:46.837: INFO: Created: latency-svc-lbgdk May 8 21:08:46.884: INFO: Got endpoints: latency-svc-lbgdk [890.734881ms] May 8 21:08:46.925: INFO: Created: latency-svc-96ht7 May 8 21:08:46.952: INFO: Got endpoints: latency-svc-96ht7 [908.24048ms] May 8 21:08:46.973: INFO: Created: latency-svc-vdjpc May 8 21:08:47.015: INFO: Got endpoints: latency-svc-vdjpc [856.821009ms] May 8 21:08:47.027: INFO: Created: latency-svc-sd6mz May 8 21:08:47.043: INFO: Got endpoints: latency-svc-sd6mz [875.196586ms] May 8 21:08:47.064: INFO: Created: latency-svc-dhnqn May 8 21:08:47.079: INFO: Got endpoints: latency-svc-dhnqn [881.242288ms] May 8 21:08:47.100: INFO: Created: latency-svc-q74f2 May 8 21:08:47.171: INFO: Got endpoints: latency-svc-q74f2 [886.084605ms] May 8 21:08:47.195: INFO: Created: latency-svc-4l7f9 May 8 21:08:47.218: INFO: Got endpoints: latency-svc-4l7f9 [856.724532ms] May 8 21:08:47.231: INFO: Created: latency-svc-tqwhj May 8 21:08:47.249: INFO: Got endpoints: latency-svc-tqwhj [801.642215ms] May 8 21:08:47.309: INFO: Created: latency-svc-2gvpv May 8 21:08:47.315: INFO: Got endpoints: latency-svc-2gvpv [851.726083ms] May 8 21:08:47.352: INFO: Created: latency-svc-j6cs2 May 8 21:08:47.370: INFO: Got endpoints: latency-svc-j6cs2 [858.065699ms] May 8 21:08:47.394: INFO: Created: latency-svc-nx5jf May 8 21:08:47.406: INFO: Got endpoints: latency-svc-nx5jf [803.451258ms] May 8 21:08:47.465: INFO: Created: latency-svc-4964f May 8 21:08:47.472: INFO: Got endpoints: latency-svc-4964f [826.464423ms] May 8 21:08:47.508: INFO: Created: latency-svc-h6bx8 May 8 21:08:47.521: INFO: Got endpoints: latency-svc-h6bx8 [832.313158ms] May 8 21:08:47.543: INFO: Created: latency-svc-jqg2x May 8 21:08:47.608: INFO: Got endpoints: latency-svc-jqg2x [864.715451ms] May 8 21:08:47.621: INFO: Created: latency-svc-jl2t5 May 8 21:08:47.641: INFO: Got endpoints: latency-svc-jl2t5 [827.597782ms] May 8 21:08:47.658: 
INFO: Created: latency-svc-r4zfj May 8 21:08:47.666: INFO: Got endpoints: latency-svc-r4zfj [781.683719ms] May 8 21:08:47.688: INFO: Created: latency-svc-6bl8f May 8 21:08:47.697: INFO: Got endpoints: latency-svc-6bl8f [744.534241ms] May 8 21:08:47.752: INFO: Created: latency-svc-9t2xw May 8 21:08:47.755: INFO: Got endpoints: latency-svc-9t2xw [739.902032ms] May 8 21:08:47.789: INFO: Created: latency-svc-65z99 May 8 21:08:47.805: INFO: Got endpoints: latency-svc-65z99 [762.409218ms] May 8 21:08:47.825: INFO: Created: latency-svc-fx5rh May 8 21:08:47.845: INFO: Got endpoints: latency-svc-fx5rh [765.952933ms] May 8 21:08:47.845: INFO: Latencies: [38.506619ms 68.226702ms 136.299431ms 160.16098ms 297.081953ms 345.940955ms 471.057141ms 490.584895ms 556.525385ms 618.789829ms 630.267727ms 638.350782ms 646.486282ms 648.305143ms 663.583172ms 671.307953ms 673.25588ms 676.504411ms 682.642337ms 686.181653ms 688.006128ms 695.939596ms 705.13868ms 715.684185ms 717.277191ms 719.475762ms 723.201028ms 723.841395ms 724.075998ms 725.067055ms 725.276526ms 729.029093ms 729.720024ms 734.819011ms 734.933684ms 735.542918ms 736.525963ms 738.06013ms 739.902032ms 742.262248ms 742.547622ms 743.002609ms 744.534241ms 745.040129ms 747.279143ms 747.464847ms 747.580915ms 750.876801ms 752.127575ms 752.894739ms 752.909373ms 754.476048ms 754.616444ms 756.077162ms 761.212202ms 762.409218ms 764.680987ms 765.020874ms 765.952933ms 766.014176ms 767.874717ms 769.054792ms 770.970176ms 771.633352ms 772.227771ms 772.266789ms 772.614334ms 773.793074ms 778.136654ms 778.564147ms 779.241077ms 781.683719ms 781.922274ms 782.818557ms 784.701443ms 786.150873ms 787.64125ms 789.366715ms 792.678293ms 794.583606ms 795.934561ms 797.256132ms 800.917434ms 801.572219ms 801.642215ms 802.264891ms 803.451258ms 805.045715ms 806.158978ms 808.017317ms 810.278845ms 810.56949ms 810.920346ms 813.45267ms 813.882784ms 814.027439ms 814.059137ms 814.13861ms 814.575325ms 816.032881ms 822.516227ms 822.775561ms 825.537071ms 826.464423ms 827.597782ms 827.809231ms 829.397435ms 830.693743ms 830.753345ms 831.179463ms 832.313158ms 832.619003ms 832.793336ms 833.05227ms 834.003521ms 835.124081ms 835.942813ms 837.011081ms 837.549767ms 837.769605ms 839.792554ms 840.979554ms 843.909735ms 843.993ms 844.622187ms 845.440359ms 849.739815ms 851.55335ms 851.726083ms 853.968301ms 855.611389ms 856.698058ms 856.724532ms 856.821009ms 858.065699ms 859.783401ms 860.948989ms 862.390085ms 862.914628ms 864.715451ms 864.961288ms 867.242085ms 867.627207ms 868.10765ms 870.447ms 875.196586ms 877.148139ms 878.960365ms 880.138508ms 880.155843ms 880.781621ms 881.242288ms 882.398754ms 882.500583ms 884.62008ms 885.658088ms 886.084605ms 890.734881ms 892.578296ms 892.974088ms 893.100128ms 895.057776ms 897.864632ms 897.94657ms 902.551055ms 908.24048ms 910.877226ms 911.81137ms 927.284357ms 937.967134ms 941.289932ms 958.073619ms 963.099011ms 967.225906ms 979.854132ms 993.712139ms 996.468688ms 1.016276695s 1.017774547s 1.018626561s 1.038538427s 1.053269165s 1.054087635s 1.054274652s 1.076428009s 1.078357481s 1.085970762s 1.09663089s 1.098924965s 1.110373082s 1.11521327s 1.119423415s 1.122166905s 1.126267555s 1.126685174s 1.127018431s 1.131954742s 1.139331964s 1.143356167s 1.17257232s] May 8 21:08:47.846: INFO: 50 %ile: 822.516227ms May 8 21:08:47.846: INFO: 90 %ile: 1.038538427s May 8 21:08:47.846: INFO: 99 %ile: 1.143356167s May 8 21:08:47.846: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:08:47.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3427" for this suite.
• [SLOW TEST:15.591 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":2,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:08:47.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
May 8 21:08:48.015: INFO: Waiting up to 5m0s for pod "pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3" in namespace "emptydir-1708" to be "success or failure"
May 8 21:08:48.035: INFO: Pod "pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.092442ms
May 8 21:08:50.154: INFO: Pod "pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138126212s
May 8 21:08:52.158: INFO: Pod "pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142169635s
STEP: Saw pod success
May 8 21:08:52.158: INFO: Pod "pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3" satisfied condition "success or failure"
May 8 21:08:52.161: INFO: Trying to get logs from node jerma-worker2 pod pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3 container test-container:
STEP: delete the pod
May 8 21:08:52.218: INFO: Waiting for pod pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3 to disappear
May 8 21:08:52.238: INFO: Pod pod-e826401f-5cd3-4f9c-9a72-0635fb90f7e3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:08:52.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1708" for this suite.
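The emptydir specs in this run all follow the pattern visible above: create a pod whose container reports facts about the mounted volume, wait for it to reach "success or failure", then read the container log. A hedged sketch of the kind of pod the (root,0777,default) spec creates; the pod name, image, and command are illustrative assumptions, not the test's actual manifest, and a recent client-go (v0.18+, which takes a context argument) is assumed:

```go
package emptydirsketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyDirPod sketches the shape of pod such a spec creates: an emptyDir
// on the node's default medium, mounted into a container that prints the
// mount's mode bits and then exits so the pod can reach "Succeeded".
func createEmptyDirPod(client *kubernetes.Clientset, ns string) (*v1.Pod, error) {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// An empty Medium field means the default medium:
					// node-local storage rather than tmpfs.
					EmptyDir: &v1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []v1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /opt"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/opt"}},
			}},
		},
	}
	return client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
}
```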
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":34,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:08:52.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 8 21:08:52.293: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14527933 0 2020-05-08 21:08:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 21:08:52.293: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14527933 0 2020-05-08 21:08:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 8 21:09:02.346: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14528289 0 2020-05-08 21:08:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 8 21:09:02.346: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14528289 0 2020-05-08 21:08:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 8 21:09:12.354: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14528614 0 2020-05-08 21:08:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 21:09:12.355: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14528614 0 2020-05-08 21:08:52 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 8 21:09:22.362: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14528649 0 2020-05-08 21:08:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 21:09:22.362: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-a 528039fa-ac62-4ce1-adbc-66f0fd6fa7d5 14528649 0 2020-05-08 21:08:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 8 21:09:32.369: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-b ab5953ad-5a91-43f7-8f58-33ffc8b3a374 14528679 0 2020-05-08 21:09:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 21:09:32.369: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-b ab5953ad-5a91-43f7-8f58-33ffc8b3a374 14528679 0 2020-05-08 21:09:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 8 21:09:42.376: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-b ab5953ad-5a91-43f7-8f58-33ffc8b3a374 14528709 0 2020-05-08 21:09:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 21:09:42.377: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-48 /api/v1/namespaces/watch-48/configmaps/e2e-watch-test-configmap-b ab5953ad-5a91-43f7-8f58-33ffc8b3a374 14528709 0 2020-05-08 21:09:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:09:52.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-48" for this suite. 
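Both Watchers specs in this run come down to the core watch API: open a watch on configmaps matching a label, optionally resuming from a specific resourceVersion (which is what the first spec of the run verified), and consume the ADDED/MODIFIED/DELETED events that show up as "Got :" lines above. A minimal sketch, assuming a recent client-go (older releases, like the v1.17 vintage under test here, omit the context argument):

```go
package watchsketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps mirrors what the spec above does: watch configmaps carrying
// a label, optionally resuming from a known resourceVersion, and report each
// event. The label value matches the one used by the test.
func watchConfigMaps(client *kubernetes.Clientset, ns, rv string) error {
	w, err := client.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=multiple-watchers-A",
		ResourceVersion: rv, // "" = start from now; a specific RV replays changes after that point
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// ev.Type is ADDED, MODIFIED, or DELETED, matching the log lines above.
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}
```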
• [SLOW TEST:60.141 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":4,"skipped":42,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:09:52.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
May 8 21:09:57.037: INFO: Successfully updated pod "annotationupdate077ca3a5-9996-404b-b69d-5296310cc7b6"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:10:01.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6242" for this suite.
• [SLOW TEST:8.708 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":55,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:10:01.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0508 21:10:41.944054 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 8 21:10:41.944: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:10:41.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8141" for this suite.
• [SLOW TEST:40.856 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":6,"skipped":57,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:10:41.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 8 21:10:42.050: INFO: Waiting up to 5m0s for pod "pod-d142e0e9-cea3-46a3-a199-a65760871944" in namespace "emptydir-6477" to be "success or failure"
May 8 21:10:42.072: INFO: Pod "pod-d142e0e9-cea3-46a3-a199-a65760871944": Phase="Pending", Reason="", readiness=false. Elapsed: 22.06485ms
May 8 21:10:44.076: INFO: Pod "pod-d142e0e9-cea3-46a3-a199-a65760871944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025874858s
May 8 21:10:46.084: INFO: Pod "pod-d142e0e9-cea3-46a3-a199-a65760871944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033992668s
STEP: Saw pod success
May 8 21:10:46.084: INFO: Pod "pod-d142e0e9-cea3-46a3-a199-a65760871944" satisfied condition "success or failure"
May 8 21:10:46.087: INFO: Trying to get logs from node jerma-worker pod pod-d142e0e9-cea3-46a3-a199-a65760871944 container test-container:
STEP: delete the pod
May 8 21:10:46.146: INFO: Waiting for pod pod-d142e0e9-cea3-46a3-a199-a65760871944 to disappear
May 8 21:10:46.149: INFO: Pod pod-d142e0e9-cea3-46a3-a199-a65760871944 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:10:46.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6477" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":57,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:10:46.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
May 8 21:10:46.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
May 8 21:10:49.907: INFO: stderr: ""
May 8 21:10:49.907: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:10:49.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7517" for this suite.
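What `kubectl cluster-info` prints in the spec above is the API server endpoint plus proxy URLs for well-known services such as kube-dns. The same facts can be checked programmatically; a sketch under the assumption of a recent client-go (the function name and shape are illustrative, not the test's code):

```go
package clusterinfosketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// masterAndDNS reports roughly what `kubectl cluster-info` shows: the API
// server endpoint from the client config, and whether the kube-dns service
// behind the "KubeDNS is running at ..." line actually exists.
func masterAndDNS(cfg *rest.Config, client *kubernetes.Clientset) (master string, dnsFound bool, err error) {
	master = cfg.Host // e.g. https://172.30.12.66:32770 in the run above
	_, err = client.CoreV1().Services("kube-system").Get(context.TODO(), "kube-dns", metav1.GetOptions{})
	if err != nil {
		return master, false, err
	}
	return master, true, nil
}
```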
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":8,"skipped":68,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:10:50.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:10:50.686: INFO: Creating ReplicaSet my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693 May 8 21:10:50.742: INFO: Pod name my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693: Found 0 pods out of 1 May 8 21:10:55.747: INFO: Pod name my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693: Found 1 pods out of 1 May 8 21:10:55.747: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693" is running May 8 21:10:55.750: INFO: Pod "my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693-w98hg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:10:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:10:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:10:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:10:50 +0000 UTC Reason: Message:}]) May 8 21:10:55.750: INFO: Trying to dial the pod May 8 21:11:00.760: INFO: Controller my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693: Got expected result from replica 1 [my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693-w98hg]: "my-hostname-basic-23c01599-a0e6-45d4-ba8b-7a3ce7f02693-w98hg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:00.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2221" for this suite. 
• [SLOW TEST:10.688 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":9,"skipped":78,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 8 21:11:00.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
May 8 21:11:00.901: INFO: Waiting up to 5m0s for pod "pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee" in namespace "emptydir-4961" to be "success or failure"
May 8 21:11:00.928: INFO: Pod "pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee": Phase="Pending", Reason="", readiness=false. Elapsed: 27.516719ms
May 8 21:11:02.932: INFO: Pod "pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031361405s
May 8 21:11:04.936: INFO: Pod "pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035622484s
STEP: Saw pod success
May 8 21:11:04.937: INFO: Pod "pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee" satisfied condition "success or failure"
May 8 21:11:04.939: INFO: Trying to get logs from node jerma-worker pod pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee container test-container:
STEP: delete the pod
May 8 21:11:04.966: INFO: Waiting for pod pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee to disappear
May 8 21:11:04.970: INFO: Pod pod-6be5ef18-5c11-4ebe-9b18-f5ecddaf72ee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 8 21:11:04.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4961" for this suite.
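The tmpfs emptydir variants, like the one above, differ from the default-medium pod sketched earlier only in the volume source's Medium field. A minimal sketch (volume name is illustrative):

```go
package tmpfssketch

import v1 "k8s.io/api/core/v1"

// tmpfsVolume returns an emptyDir volume backed by memory (tmpfs), the medium
// the "volume on tmpfs" specs exercise.
func tmpfsVolume() v1.Volume {
	return v1.Volume{
		Name: "test-volume",
		VolumeSource: v1.VolumeSource{
			EmptyDir: &v1.EmptyDirVolumeSource{
				// "Memory" selects tmpfs; the default ("") means node-local disk.
				Medium: v1.StorageMediumMemory,
			},
		},
	}
}
```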
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":127,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:04.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:11:05.746: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:11:07.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 21:11:09.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569065, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:11:12.856: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:11:12.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2212-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:13.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6867" for this suite. STEP: Destroying namespace "webhook-6867-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.832 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":11,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:13.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 8 21:11:13.888: INFO: Waiting up to 5m0s for pod "client-containers-bf586185-a1a6-4f4f-acac-72ad51873828" in namespace "containers-4545" to be "success or failure" May 8 21:11:13.892: INFO: Pod "client-containers-bf586185-a1a6-4f4f-acac-72ad51873828": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03679ms May 8 21:11:15.896: INFO: Pod "client-containers-bf586185-a1a6-4f4f-acac-72ad51873828": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008228183s May 8 21:11:17.900: INFO: Pod "client-containers-bf586185-a1a6-4f4f-acac-72ad51873828": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012195299s STEP: Saw pod success May 8 21:11:17.900: INFO: Pod "client-containers-bf586185-a1a6-4f4f-acac-72ad51873828" satisfied condition "success or failure" May 8 21:11:17.904: INFO: Trying to get logs from node jerma-worker pod client-containers-bf586185-a1a6-4f4f-acac-72ad51873828 container test-container: STEP: delete the pod May 8 21:11:17.924: INFO: Waiting for pod client-containers-bf586185-a1a6-4f4f-acac-72ad51873828 to disappear May 8 21:11:17.929: INFO: Pod client-containers-bf586185-a1a6-4f4f-acac-72ad51873828 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:17.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4545" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":170,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:17.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:11:18.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4" in namespace "downward-api-9008" to be "success or failure" May 8 21:11:18.067: INFO: Pod "downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.959834ms May 8 21:11:20.079: INFO: Pod "downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020145835s May 8 21:11:22.085: INFO: Pod "downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026527182s STEP: Saw pod success May 8 21:11:22.086: INFO: Pod "downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4" satisfied condition "success or failure" May 8 21:11:22.088: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4 container client-container: STEP: delete the pod May 8 21:11:22.104: INFO: Waiting for pod downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4 to disappear May 8 21:11:22.109: INFO: Pod downwardapi-volume-9058095e-0430-4d65-ba23-8aadcdc917d4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:22.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9008" for this suite. 
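------------------------------
The downward API volume above projects the container's own resource request into a file; the essential piece is a resourceFieldRef whose divisor fixes the unit, so a 250m cpu request read through a 1m divisor appears in the file as 250. A minimal sketch, with illustrative names and image:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardCPURequestPod writes requests.cpu into /etc/podinfo/cpu_request;
// with Divisor 1m, the 250m request below is rendered in the file as "250".
func downwardCPURequestPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-cpu", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "cpu_request",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "requests.cpu",
							Divisor:       resource.MustParse("1m"),
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}
------------------------------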
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:22.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:11:22.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a" in namespace "downward-api-3375" to be "success or failure" May 8 21:11:22.215: INFO: Pod "downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.518773ms May 8 21:11:24.235: INFO: Pod "downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040031741s May 8 21:11:26.240: INFO: Pod "downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045019733s STEP: Saw pod success May 8 21:11:26.240: INFO: Pod "downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a" satisfied condition "success or failure" May 8 21:11:26.248: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a container client-container: STEP: delete the pod May 8 21:11:26.307: INFO: Waiting for pod downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a to disappear May 8 21:11:26.318: INFO: Pod downwardapi-volume-5360c404-290a-44d3-ad21-97d65699c67a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:26.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3375" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:26.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0508 21:11:27.458046 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 21:11:27.458: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:27.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3192" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":15,"skipped":250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:27.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 8 21:11:35.643: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:35.655: INFO: Pod pod-with-prestop-exec-hook still exists May 8 21:11:37.655: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:37.659: INFO: Pod pod-with-prestop-exec-hook still exists May 8 21:11:39.655: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:39.659: INFO: Pod pod-with-prestop-exec-hook still exists May 8 21:11:41.655: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:41.659: INFO: Pod pod-with-prestop-exec-hook still exists May 8 21:11:43.655: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:43.677: INFO: Pod pod-with-prestop-exec-hook still exists May 8 21:11:45.655: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:45.670: INFO: Pod pod-with-prestop-exec-hook still exists May 8 21:11:47.655: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:47.659: INFO: Pod pod-with-prestop-exec-hook still exists May 8 21:11:49.655: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 21:11:49.658: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:49.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6987" for this suite. 
• [SLOW TEST:22.170 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":294,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:49.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 8 21:11:49.784: INFO: Waiting up to 5m0s for pod "pod-7158726c-3680-4136-93fb-ea042c964852" in namespace "emptydir-5726" to be "success or failure" May 8 21:11:49.811: INFO: Pod "pod-7158726c-3680-4136-93fb-ea042c964852": Phase="Pending", Reason="", readiness=false. Elapsed: 27.320301ms May 8 21:11:51.839: INFO: Pod "pod-7158726c-3680-4136-93fb-ea042c964852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054973704s May 8 21:11:53.846: INFO: Pod "pod-7158726c-3680-4136-93fb-ea042c964852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062746552s STEP: Saw pod success May 8 21:11:53.847: INFO: Pod "pod-7158726c-3680-4136-93fb-ea042c964852" satisfied condition "success or failure" May 8 21:11:53.849: INFO: Trying to get logs from node jerma-worker pod pod-7158726c-3680-4136-93fb-ea042c964852 container test-container: STEP: delete the pod May 8 21:11:53.876: INFO: Waiting for pod pod-7158726c-3680-4136-93fb-ea042c964852 to disappear May 8 21:11:53.896: INFO: Pod pod-7158726c-3680-4136-93fb-ea042c964852 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:11:53.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5726" for this suite. 
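------------------------------
The default-medium variant is the same permission check with the volume backed by the node's disk: leaving Medium unset (StorageMediumDefault, the empty string) selects ordinary node-local storage rather than tmpfs. Relative to the tmpfs sketch earlier, only the volume changes:

package example

import corev1 "k8s.io/api/core/v1"

// defaultMediumVolume is the disk-backed counterpart of the tmpfs volume
// above; an unset Medium means the node's default storage.
func defaultMediumVolume(name string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
}
------------------------------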
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":306,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:11:53.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-16f7506c-227d-49ee-a671-91947c153466 STEP: Creating a pod to test consume configMaps May 8 21:11:53.962: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01" in namespace "projected-2273" to be "success or failure" May 8 21:11:53.966: INFO: Pod "pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024384ms May 8 21:11:55.976: INFO: Pod "pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013926817s May 8 21:11:57.984: INFO: Pod "pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022093419s May 8 21:11:59.987: INFO: Pod "pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025460094s STEP: Saw pod success May 8 21:11:59.987: INFO: Pod "pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01" satisfied condition "success or failure" May 8 21:11:59.990: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01 container projected-configmap-volume-test: STEP: delete the pod May 8 21:12:00.023: INFO: Waiting for pod pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01 to disappear May 8 21:12:00.031: INFO: Pod pod-projected-configmaps-fb549a62-79bb-4ed4-a8ae-95cff4359a01 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:12:00.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2273" for this suite. 
• [SLOW TEST:6.135 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":315,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:12:00.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-cc2f0a20-6891-4acb-83c4-a04a56cd19cc STEP: Creating a pod to test consume configMaps May 8 21:12:00.129: INFO: Waiting up to 5m0s for pod "pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df" in namespace "configmap-1645" to be "success or failure" May 8 21:12:00.210: INFO: Pod "pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df": Phase="Pending", Reason="", readiness=false. Elapsed: 80.11049ms May 8 21:12:02.214: INFO: Pod "pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084387397s May 8 21:12:04.218: INFO: Pod "pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088459212s STEP: Saw pod success May 8 21:12:04.218: INFO: Pod "pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df" satisfied condition "success or failure" May 8 21:12:04.221: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df container configmap-volume-test: STEP: delete the pod May 8 21:12:04.265: INFO: Waiting for pod pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df to disappear May 8 21:12:04.320: INFO: Pod pod-configmaps-d141e544-2ccd-49d8-a6f8-e6b84bb4d6df no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:12:04.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1645" for this suite. 
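------------------------------
The plain ConfigMap flavour of the same test skips the projected indirection and uses ConfigMapVolumeSource directly; DefaultMode is the knob that sets permission bits on the rendered files. A sketch:

package example

import corev1 "k8s.io/api/core/v1"

// configMapVolume mounts a ConfigMap directly, without the projected
// wrapper; DefaultMode sets the permission bits on the rendered files.
func configMapVolume(name, cmName string) corev1.Volume {
	mode := int32(0644)
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				DefaultMode:          &mode,
			},
		},
	}
}
------------------------------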
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":319,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:12:04.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 8 21:12:04.466: INFO: namespace kubectl-4227 May 8 21:12:04.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4227' May 8 21:12:04.778: INFO: stderr: "" May 8 21:12:04.778: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 8 21:12:05.782: INFO: Selector matched 1 pods for map[app:agnhost] May 8 21:12:05.782: INFO: Found 0 / 1 May 8 21:12:06.857: INFO: Selector matched 1 pods for map[app:agnhost] May 8 21:12:06.857: INFO: Found 0 / 1 May 8 21:12:07.782: INFO: Selector matched 1 pods for map[app:agnhost] May 8 21:12:07.782: INFO: Found 0 / 1 May 8 21:12:08.782: INFO: Selector matched 1 pods for map[app:agnhost] May 8 21:12:08.783: INFO: Found 1 / 1 May 8 21:12:08.783: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 8 21:12:08.786: INFO: Selector matched 1 pods for map[app:agnhost] May 8 21:12:08.786: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 8 21:12:08.786: INFO: wait on agnhost-master startup in kubectl-4227 May 8 21:12:08.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-2zj2r agnhost-master --namespace=kubectl-4227' May 8 21:12:08.943: INFO: stderr: "" May 8 21:12:08.943: INFO: stdout: "Paused\n" STEP: exposing RC May 8 21:12:08.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4227' May 8 21:12:09.076: INFO: stderr: "" May 8 21:12:09.076: INFO: stdout: "service/rm2 exposed\n" May 8 21:12:09.114: INFO: Service rm2 in namespace kubectl-4227 found. STEP: exposing service May 8 21:12:11.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4227' May 8 21:12:11.272: INFO: stderr: "" May 8 21:12:11.272: INFO: stdout: "service/rm3 exposed\n" May 8 21:12:11.278: INFO: Service rm3 in namespace kubectl-4227 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:12:13.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4227" for this suite. 
• [SLOW TEST:8.962 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":20,"skipped":323,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:12:13.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6189, will wait for the garbage collector to delete the pods May 8 21:12:17.691: INFO: Deleting Job.batch foo took: 5.718738ms May 8 21:12:17.991: INFO: Terminating Job.batch foo pods took: 300.261363ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:12:59.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6189" for this suite. • [SLOW TEST:46.228 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":21,"skipped":324,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:12:59.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:13:06.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4311" for this suite. 
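------------------------------
"Promptly calculated" above means the quota controller fills in status.hard and status.used shortly after the ResourceQuota is created, and the test simply polls for that. A sketch of the same create-then-poll loop, assuming a recent client-go:

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createQuotaAndWait creates a ResourceQuota and polls until the quota
// controller has published a status, i.e. status.Hard is populated.
func createQuotaAndWait(ctx context.Context, cs kubernetes.Interface, ns string) error {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("5"),
			},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		cur, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, rq.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return len(cur.Status.Hard) > 0, nil
	})
}
------------------------------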
• [SLOW TEST:7.156 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":22,"skipped":333,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:13:06.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9384 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-9384 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9384 May 8 21:13:06.812: INFO: Found 0 stateful pods, waiting for 1 May 8 21:13:16.817: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 8 21:13:16.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:13:17.110: INFO: stderr: "I0508 21:13:16.950565 151 log.go:172] (0xc0005b22c0) (0xc0006e5900) Create stream\nI0508 21:13:16.950643 151 log.go:172] (0xc0005b22c0) (0xc0006e5900) Stream added, broadcasting: 1\nI0508 21:13:16.954331 151 log.go:172] (0xc0005b22c0) Reply frame received for 1\nI0508 21:13:16.954378 151 log.go:172] (0xc0005b22c0) (0xc0006e5ae0) Create stream\nI0508 21:13:16.954392 151 log.go:172] (0xc0005b22c0) (0xc0006e5ae0) Stream added, broadcasting: 3\nI0508 21:13:16.955573 151 log.go:172] (0xc0005b22c0) Reply frame received for 3\nI0508 21:13:16.955609 151 log.go:172] (0xc0005b22c0) (0xc000a06000) Create stream\nI0508 21:13:16.955643 151 log.go:172] (0xc0005b22c0) (0xc000a06000) Stream added, broadcasting: 5\nI0508 21:13:16.956482 151 log.go:172] (0xc0005b22c0) Reply frame received for 5\nI0508 21:13:17.033059 151 log.go:172] (0xc0005b22c0) Data frame received for 5\nI0508 21:13:17.033083 151 log.go:172] (0xc000a06000) (5) Data frame handling\nI0508 21:13:17.033099 151 log.go:172] (0xc000a06000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:13:17.101938 151 log.go:172] 
(0xc0005b22c0) Data frame received for 3\nI0508 21:13:17.101971 151 log.go:172] (0xc0006e5ae0) (3) Data frame handling\nI0508 21:13:17.101981 151 log.go:172] (0xc0006e5ae0) (3) Data frame sent\nI0508 21:13:17.101988 151 log.go:172] (0xc0005b22c0) Data frame received for 3\nI0508 21:13:17.101993 151 log.go:172] (0xc0006e5ae0) (3) Data frame handling\nI0508 21:13:17.102004 151 log.go:172] (0xc0005b22c0) Data frame received for 5\nI0508 21:13:17.102012 151 log.go:172] (0xc000a06000) (5) Data frame handling\nI0508 21:13:17.104430 151 log.go:172] (0xc0005b22c0) Data frame received for 1\nI0508 21:13:17.104466 151 log.go:172] (0xc0006e5900) (1) Data frame handling\nI0508 21:13:17.104480 151 log.go:172] (0xc0006e5900) (1) Data frame sent\nI0508 21:13:17.104510 151 log.go:172] (0xc0005b22c0) (0xc0006e5900) Stream removed, broadcasting: 1\nI0508 21:13:17.104544 151 log.go:172] (0xc0005b22c0) Go away received\nI0508 21:13:17.105010 151 log.go:172] (0xc0005b22c0) (0xc0006e5900) Stream removed, broadcasting: 1\nI0508 21:13:17.105041 151 log.go:172] (0xc0005b22c0) (0xc0006e5ae0) Stream removed, broadcasting: 3\nI0508 21:13:17.105055 151 log.go:172] (0xc0005b22c0) (0xc000a06000) Stream removed, broadcasting: 5\n" May 8 21:13:17.110: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:13:17.110: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:13:17.115: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 8 21:13:27.119: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 21:13:27.119: INFO: Waiting for statefulset status.replicas updated to 0 May 8 21:13:27.136: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:13:27.136: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC }] May 8 21:13:27.136: INFO: May 8 21:13:27.136: INFO: StatefulSet ss has not reached scale 3, at 1 May 8 21:13:28.188: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993171382s May 8 21:13:29.193: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.941742325s May 8 21:13:30.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.935990504s May 8 21:13:31.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.924170734s May 8 21:13:32.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.919520154s May 8 21:13:33.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.914951766s May 8 21:13:34.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.90966328s May 8 21:13:35.229: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.904247438s May 8 21:13:36.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 899.993175ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9384 May 8 21:13:37.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 
ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:13:37.528: INFO: stderr: "I0508 21:13:37.398305 171 log.go:172] (0xc0003266e0) (0xc0005ebe00) Create stream\nI0508 21:13:37.398377 171 log.go:172] (0xc0003266e0) (0xc0005ebe00) Stream added, broadcasting: 1\nI0508 21:13:37.401347 171 log.go:172] (0xc0003266e0) Reply frame received for 1\nI0508 21:13:37.401381 171 log.go:172] (0xc0003266e0) (0xc000960000) Create stream\nI0508 21:13:37.401390 171 log.go:172] (0xc0003266e0) (0xc000960000) Stream added, broadcasting: 3\nI0508 21:13:37.402239 171 log.go:172] (0xc0003266e0) Reply frame received for 3\nI0508 21:13:37.402300 171 log.go:172] (0xc0003266e0) (0xc0009600a0) Create stream\nI0508 21:13:37.402326 171 log.go:172] (0xc0003266e0) (0xc0009600a0) Stream added, broadcasting: 5\nI0508 21:13:37.403366 171 log.go:172] (0xc0003266e0) Reply frame received for 5\nI0508 21:13:37.521327 171 log.go:172] (0xc0003266e0) Data frame received for 3\nI0508 21:13:37.521369 171 log.go:172] (0xc000960000) (3) Data frame handling\nI0508 21:13:37.521401 171 log.go:172] (0xc000960000) (3) Data frame sent\nI0508 21:13:37.521547 171 log.go:172] (0xc0003266e0) Data frame received for 5\nI0508 21:13:37.521569 171 log.go:172] (0xc0009600a0) (5) Data frame handling\nI0508 21:13:37.521587 171 log.go:172] (0xc0009600a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 21:13:37.521772 171 log.go:172] (0xc0003266e0) Data frame received for 3\nI0508 21:13:37.521795 171 log.go:172] (0xc000960000) (3) Data frame handling\nI0508 21:13:37.521825 171 log.go:172] (0xc0003266e0) Data frame received for 5\nI0508 21:13:37.521858 171 log.go:172] (0xc0009600a0) (5) Data frame handling\nI0508 21:13:37.523133 171 log.go:172] (0xc0003266e0) Data frame received for 1\nI0508 21:13:37.523202 171 log.go:172] (0xc0005ebe00) (1) Data frame handling\nI0508 21:13:37.523223 171 log.go:172] (0xc0005ebe00) (1) Data frame sent\nI0508 21:13:37.523233 171 log.go:172] (0xc0003266e0) (0xc0005ebe00) Stream removed, broadcasting: 1\nI0508 21:13:37.523344 171 log.go:172] (0xc0003266e0) Go away received\nI0508 21:13:37.523489 171 log.go:172] (0xc0003266e0) (0xc0005ebe00) Stream removed, broadcasting: 1\nI0508 21:13:37.523502 171 log.go:172] (0xc0003266e0) (0xc000960000) Stream removed, broadcasting: 3\nI0508 21:13:37.523510 171 log.go:172] (0xc0003266e0) (0xc0009600a0) Stream removed, broadcasting: 5\n" May 8 21:13:37.529: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 21:13:37.529: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 21:13:37.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:13:37.752: INFO: stderr: "I0508 21:13:37.671524 192 log.go:172] (0xc0008ca000) (0xc000725540) Create stream\nI0508 21:13:37.671585 192 log.go:172] (0xc0008ca000) (0xc000725540) Stream added, broadcasting: 1\nI0508 21:13:37.674583 192 log.go:172] (0xc0008ca000) Reply frame received for 1\nI0508 21:13:37.674648 192 log.go:172] (0xc0008ca000) (0xc0008fc000) Create stream\nI0508 21:13:37.674663 192 log.go:172] (0xc0008ca000) (0xc0008fc000) Stream added, broadcasting: 3\nI0508 21:13:37.675648 192 log.go:172] (0xc0008ca000) Reply frame received for 3\nI0508 21:13:37.675689 192 log.go:172] (0xc0008ca000) (0xc000af0000) Create 
stream\nI0508 21:13:37.675701 192 log.go:172] (0xc0008ca000) (0xc000af0000) Stream added, broadcasting: 5\nI0508 21:13:37.676603 192 log.go:172] (0xc0008ca000) Reply frame received for 5\nI0508 21:13:37.745249 192 log.go:172] (0xc0008ca000) Data frame received for 3\nI0508 21:13:37.745293 192 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0508 21:13:37.745304 192 log.go:172] (0xc0008fc000) (3) Data frame sent\nI0508 21:13:37.745315 192 log.go:172] (0xc0008ca000) Data frame received for 3\nI0508 21:13:37.745326 192 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0508 21:13:37.745392 192 log.go:172] (0xc0008ca000) Data frame received for 5\nI0508 21:13:37.745430 192 log.go:172] (0xc000af0000) (5) Data frame handling\nI0508 21:13:37.745458 192 log.go:172] (0xc000af0000) (5) Data frame sent\nI0508 21:13:37.745471 192 log.go:172] (0xc0008ca000) Data frame received for 5\nI0508 21:13:37.745478 192 log.go:172] (0xc000af0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0508 21:13:37.746912 192 log.go:172] (0xc0008ca000) Data frame received for 1\nI0508 21:13:37.746939 192 log.go:172] (0xc000725540) (1) Data frame handling\nI0508 21:13:37.746954 192 log.go:172] (0xc000725540) (1) Data frame sent\nI0508 21:13:37.746973 192 log.go:172] (0xc0008ca000) (0xc000725540) Stream removed, broadcasting: 1\nI0508 21:13:37.747004 192 log.go:172] (0xc0008ca000) Go away received\nI0508 21:13:37.747333 192 log.go:172] (0xc0008ca000) (0xc000725540) Stream removed, broadcasting: 1\nI0508 21:13:37.747353 192 log.go:172] (0xc0008ca000) (0xc0008fc000) Stream removed, broadcasting: 3\nI0508 21:13:37.747361 192 log.go:172] (0xc0008ca000) (0xc000af0000) Stream removed, broadcasting: 5\n" May 8 21:13:37.752: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 21:13:37.752: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 21:13:37.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:13:37.949: INFO: stderr: "I0508 21:13:37.871989 212 log.go:172] (0xc000a794a0) (0xc000ac2460) Create stream\nI0508 21:13:37.872040 212 log.go:172] (0xc000a794a0) (0xc000ac2460) Stream added, broadcasting: 1\nI0508 21:13:37.877925 212 log.go:172] (0xc000a794a0) Reply frame received for 1\nI0508 21:13:37.877986 212 log.go:172] (0xc000a794a0) (0xc00071a5a0) Create stream\nI0508 21:13:37.878014 212 log.go:172] (0xc000a794a0) (0xc00071a5a0) Stream added, broadcasting: 3\nI0508 21:13:37.879150 212 log.go:172] (0xc000a794a0) Reply frame received for 3\nI0508 21:13:37.879192 212 log.go:172] (0xc000a794a0) (0xc000573360) Create stream\nI0508 21:13:37.879205 212 log.go:172] (0xc000a794a0) (0xc000573360) Stream added, broadcasting: 5\nI0508 21:13:37.880161 212 log.go:172] (0xc000a794a0) Reply frame received for 5\nI0508 21:13:37.942507 212 log.go:172] (0xc000a794a0) Data frame received for 5\nI0508 21:13:37.942569 212 log.go:172] (0xc000a794a0) Data frame received for 3\nI0508 21:13:37.942614 212 log.go:172] (0xc00071a5a0) (3) Data frame handling\nI0508 21:13:37.942645 212 log.go:172] (0xc00071a5a0) (3) Data frame sent\nI0508 21:13:37.942667 212 log.go:172] (0xc000a794a0) Data frame received for 3\nI0508 21:13:37.942684 212 log.go:172] (0xc00071a5a0) (3) Data frame 
handling\nI0508 21:13:37.942730 212 log.go:172] (0xc000573360) (5) Data frame handling\nI0508 21:13:37.942771 212 log.go:172] (0xc000573360) (5) Data frame sent\nI0508 21:13:37.942788 212 log.go:172] (0xc000a794a0) Data frame received for 5\nI0508 21:13:37.942797 212 log.go:172] (0xc000573360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0508 21:13:37.944333 212 log.go:172] (0xc000a794a0) Data frame received for 1\nI0508 21:13:37.944362 212 log.go:172] (0xc000ac2460) (1) Data frame handling\nI0508 21:13:37.944389 212 log.go:172] (0xc000ac2460) (1) Data frame sent\nI0508 21:13:37.944581 212 log.go:172] (0xc000a794a0) (0xc000ac2460) Stream removed, broadcasting: 1\nI0508 21:13:37.944622 212 log.go:172] (0xc000a794a0) Go away received\nI0508 21:13:37.945027 212 log.go:172] (0xc000a794a0) (0xc000ac2460) Stream removed, broadcasting: 1\nI0508 21:13:37.945050 212 log.go:172] (0xc000a794a0) (0xc00071a5a0) Stream removed, broadcasting: 3\nI0508 21:13:37.945063 212 log.go:172] (0xc000a794a0) (0xc000573360) Stream removed, broadcasting: 5\n" May 8 21:13:37.950: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 21:13:37.950: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 21:13:37.954: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 8 21:13:47.958: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 8 21:13:47.958: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 8 21:13:47.958: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 8 21:13:47.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:13:48.163: INFO: stderr: "I0508 21:13:48.094877 233 log.go:172] (0xc0000f51e0) (0xc0005d4140) Create stream\nI0508 21:13:48.094937 233 log.go:172] (0xc0000f51e0) (0xc0005d4140) Stream added, broadcasting: 1\nI0508 21:13:48.098084 233 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0508 21:13:48.098125 233 log.go:172] (0xc0000f51e0) (0xc000b0c000) Create stream\nI0508 21:13:48.098142 233 log.go:172] (0xc0000f51e0) (0xc000b0c000) Stream added, broadcasting: 3\nI0508 21:13:48.099283 233 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0508 21:13:48.099323 233 log.go:172] (0xc0000f51e0) (0xc000b0c0a0) Create stream\nI0508 21:13:48.099336 233 log.go:172] (0xc0000f51e0) (0xc000b0c0a0) Stream added, broadcasting: 5\nI0508 21:13:48.100390 233 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0508 21:13:48.157040 233 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0508 21:13:48.157087 233 log.go:172] (0xc000b0c0a0) (5) Data frame handling\nI0508 21:13:48.157104 233 log.go:172] (0xc000b0c0a0) (5) Data frame sent\nI0508 21:13:48.157308 233 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0508 21:13:48.157329 233 log.go:172] (0xc000b0c000) (3) Data frame handling\nI0508 21:13:48.157343 233 log.go:172] (0xc000b0c000) (3) Data frame sent\nI0508 21:13:48.157355 233 log.go:172] (0xc0000f51e0) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:13:48.157369 
233 log.go:172] (0xc000b0c000) (3) Data frame handling\nI0508 21:13:48.157476 233 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0508 21:13:48.157508 233 log.go:172] (0xc000b0c0a0) (5) Data frame handling\nI0508 21:13:48.158944 233 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0508 21:13:48.158987 233 log.go:172] (0xc0005d4140) (1) Data frame handling\nI0508 21:13:48.159027 233 log.go:172] (0xc0005d4140) (1) Data frame sent\nI0508 21:13:48.159068 233 log.go:172] (0xc0000f51e0) (0xc0005d4140) Stream removed, broadcasting: 1\nI0508 21:13:48.159101 233 log.go:172] (0xc0000f51e0) Go away received\nI0508 21:13:48.159534 233 log.go:172] (0xc0000f51e0) (0xc0005d4140) Stream removed, broadcasting: 1\nI0508 21:13:48.159550 233 log.go:172] (0xc0000f51e0) (0xc000b0c000) Stream removed, broadcasting: 3\nI0508 21:13:48.159561 233 log.go:172] (0xc0000f51e0) (0xc000b0c0a0) Stream removed, broadcasting: 5\n" May 8 21:13:48.163: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:13:48.163: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:13:48.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:13:48.386: INFO: stderr: "I0508 21:13:48.282880 253 log.go:172] (0xc0001046e0) (0xc000745540) Create stream\nI0508 21:13:48.282942 253 log.go:172] (0xc0001046e0) (0xc000745540) Stream added, broadcasting: 1\nI0508 21:13:48.286136 253 log.go:172] (0xc0001046e0) Reply frame received for 1\nI0508 21:13:48.286169 253 log.go:172] (0xc0001046e0) (0xc000990000) Create stream\nI0508 21:13:48.286178 253 log.go:172] (0xc0001046e0) (0xc000990000) Stream added, broadcasting: 3\nI0508 21:13:48.287183 253 log.go:172] (0xc0001046e0) Reply frame received for 3\nI0508 21:13:48.287226 253 log.go:172] (0xc0001046e0) (0xc00068bae0) Create stream\nI0508 21:13:48.287244 253 log.go:172] (0xc0001046e0) (0xc00068bae0) Stream added, broadcasting: 5\nI0508 21:13:48.288304 253 log.go:172] (0xc0001046e0) Reply frame received for 5\nI0508 21:13:48.348397 253 log.go:172] (0xc0001046e0) Data frame received for 5\nI0508 21:13:48.348422 253 log.go:172] (0xc00068bae0) (5) Data frame handling\nI0508 21:13:48.348436 253 log.go:172] (0xc00068bae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:13:48.378642 253 log.go:172] (0xc0001046e0) Data frame received for 3\nI0508 21:13:48.378759 253 log.go:172] (0xc000990000) (3) Data frame handling\nI0508 21:13:48.378799 253 log.go:172] (0xc000990000) (3) Data frame sent\nI0508 21:13:48.378831 253 log.go:172] (0xc0001046e0) Data frame received for 3\nI0508 21:13:48.378944 253 log.go:172] (0xc000990000) (3) Data frame handling\nI0508 21:13:48.378967 253 log.go:172] (0xc0001046e0) Data frame received for 5\nI0508 21:13:48.378973 253 log.go:172] (0xc00068bae0) (5) Data frame handling\nI0508 21:13:48.380701 253 log.go:172] (0xc0001046e0) Data frame received for 1\nI0508 21:13:48.380739 253 log.go:172] (0xc000745540) (1) Data frame handling\nI0508 21:13:48.380761 253 log.go:172] (0xc000745540) (1) Data frame sent\nI0508 21:13:48.380775 253 log.go:172] (0xc0001046e0) (0xc000745540) Stream removed, broadcasting: 1\nI0508 21:13:48.380792 253 log.go:172] (0xc0001046e0) Go away received\nI0508 21:13:48.381233 253 log.go:172] (0xc0001046e0) (0xc000745540) Stream removed, broadcasting: 1\nI0508 
21:13:48.381250 253 log.go:172] (0xc0001046e0) (0xc000990000) Stream removed, broadcasting: 3\nI0508 21:13:48.381256 253 log.go:172] (0xc0001046e0) (0xc00068bae0) Stream removed, broadcasting: 5\n" May 8 21:13:48.386: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:13:48.386: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:13:48.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:13:48.655: INFO: stderr: "I0508 21:13:48.536675 276 log.go:172] (0xc000990000) (0xc000488000) Create stream\nI0508 21:13:48.536769 276 log.go:172] (0xc000990000) (0xc000488000) Stream added, broadcasting: 1\nI0508 21:13:48.539417 276 log.go:172] (0xc000990000) Reply frame received for 1\nI0508 21:13:48.539464 276 log.go:172] (0xc000990000) (0xc000395540) Create stream\nI0508 21:13:48.539496 276 log.go:172] (0xc000990000) (0xc000395540) Stream added, broadcasting: 3\nI0508 21:13:48.540643 276 log.go:172] (0xc000990000) Reply frame received for 3\nI0508 21:13:48.540690 276 log.go:172] (0xc000990000) (0xc000488140) Create stream\nI0508 21:13:48.540702 276 log.go:172] (0xc000990000) (0xc000488140) Stream added, broadcasting: 5\nI0508 21:13:48.541893 276 log.go:172] (0xc000990000) Reply frame received for 5\nI0508 21:13:48.600592 276 log.go:172] (0xc000990000) Data frame received for 5\nI0508 21:13:48.600621 276 log.go:172] (0xc000488140) (5) Data frame handling\nI0508 21:13:48.600643 276 log.go:172] (0xc000488140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:13:48.647677 276 log.go:172] (0xc000990000) Data frame received for 3\nI0508 21:13:48.647722 276 log.go:172] (0xc000395540) (3) Data frame handling\nI0508 21:13:48.647740 276 log.go:172] (0xc000395540) (3) Data frame sent\nI0508 21:13:48.647809 276 log.go:172] (0xc000990000) Data frame received for 3\nI0508 21:13:48.647853 276 log.go:172] (0xc000395540) (3) Data frame handling\nI0508 21:13:48.647943 276 log.go:172] (0xc000990000) Data frame received for 5\nI0508 21:13:48.647963 276 log.go:172] (0xc000488140) (5) Data frame handling\nI0508 21:13:48.650128 276 log.go:172] (0xc000990000) Data frame received for 1\nI0508 21:13:48.650157 276 log.go:172] (0xc000488000) (1) Data frame handling\nI0508 21:13:48.650179 276 log.go:172] (0xc000488000) (1) Data frame sent\nI0508 21:13:48.650193 276 log.go:172] (0xc000990000) (0xc000488000) Stream removed, broadcasting: 1\nI0508 21:13:48.650206 276 log.go:172] (0xc000990000) Go away received\nI0508 21:13:48.650525 276 log.go:172] (0xc000990000) (0xc000488000) Stream removed, broadcasting: 1\nI0508 21:13:48.650540 276 log.go:172] (0xc000990000) (0xc000395540) Stream removed, broadcasting: 3\nI0508 21:13:48.650547 276 log.go:172] (0xc000990000) (0xc000488140) Stream removed, broadcasting: 5\n" May 8 21:13:48.655: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:13:48.655: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:13:48.655: INFO: Waiting for statefulset status.replicas updated to 0 May 8 21:13:48.658: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 8 21:13:58.671: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently 
Running - Ready=false May 8 21:13:58.671: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 8 21:13:58.671: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 8 21:13:58.688: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:13:58.688: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC }] May 8 21:13:58.688: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:13:58.688: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:13:58.688: INFO: May 8 21:13:58.688: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 21:13:59.805: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:13:59.805: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC }] May 8 21:13:59.805: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:13:59.805: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:13:59.805: INFO: May 8 21:13:59.805: INFO: StatefulSet ss has not reached scale 
0, at 3 May 8 21:14:00.810: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:00.810: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:06 +0000 UTC }] May 8 21:14:00.810: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:00.811: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:00.811: INFO: May 8 21:14:00.811: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 21:14:01.815: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:01.815: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:01.815: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:01.815: INFO: May 8 21:14:01.815: INFO: StatefulSet ss has not reached scale 0, at 2 May 8 21:14:02.820: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:02.820: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:02.820: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:02.820: INFO: May 8 21:14:02.820: INFO: StatefulSet ss has not reached scale 0, at 2 May 8 21:14:03.826: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:03.826: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:03.826: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:03.826: INFO: May 8 21:14:03.826: INFO: StatefulSet ss has not reached scale 0, at 2 May 8 21:14:04.832: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:04.833: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:04.833: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:04.833: INFO: May 8 21:14:04.833: INFO: StatefulSet ss has not reached scale 0, at 2 May 8 21:14:05.838: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:05.838: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:05.838: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:05.838: INFO: May 8 21:14:05.838: INFO: StatefulSet ss has not reached scale 0, at 2 May 8 21:14:06.843: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:06.843: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:06.843: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:06.843: INFO: May 8 21:14:06.843: INFO: StatefulSet ss has not reached scale 0, at 2 May 8 21:14:07.847: INFO: POD NODE PHASE GRACE CONDITIONS May 8 21:14:07.847: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:07.847: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 21:13:27 +0000 UTC }] May 8 21:14:07.847: INFO: May 8 21:14:07.847: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9384 May 8 21:14:08.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:14:08.973: INFO: rc: 1 May 8 21:14:08.973: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 8 21:14:18.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 
ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:14:19.068: INFO: rc: 1 May 8 21:14:19.068: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:14:29.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:14:29.167: INFO: rc: 1 May 8 21:14:29.168: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:14:39.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:14:39.268: INFO: rc: 1 May 8 21:14:39.268: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:14:49.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:14:49.372: INFO: rc: 1 May 8 21:14:49.372: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:14:59.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:14:59.484: INFO: rc: 1 May 8 21:14:59.484: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:15:09.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:15:09.589: INFO: rc: 1 May 8 21:15:09.589: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:15:19.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' May 8 21:15:19.683: INFO: rc: 1 May 8 21:15:19.683: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:15:29.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:15:29.793: INFO: rc: 1 May 8 21:15:29.793: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:15:39.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:15:39.906: INFO: rc: 1 May 8 21:15:39.907: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:15:49.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:15:49.999: INFO: rc: 1 May 8 21:15:50.000: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:16:00.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:16:00.094: INFO: rc: 1 May 8 21:16:00.094: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:16:10.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:16:10.186: INFO: rc: 1 May 8 21:16:10.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:16:20.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:16:20.281: INFO: 
rc: 1 May 8 21:16:20.281: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:16:30.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:16:30.393: INFO: rc: 1 May 8 21:16:30.393: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:16:40.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:16:40.497: INFO: rc: 1 May 8 21:16:40.498: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:16:50.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:16:50.603: INFO: rc: 1 May 8 21:16:50.603: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:17:00.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:17:00.707: INFO: rc: 1 May 8 21:17:00.707: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:17:10.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:17:10.815: INFO: rc: 1 May 8 21:17:10.816: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:17:20.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:17:20.920: INFO: rc: 1 May 8 21:17:20.920: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:17:30.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:17:31.018: INFO: rc: 1 May 8 21:17:31.018: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:17:41.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:17:41.120: INFO: rc: 1 May 8 21:17:41.120: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:17:51.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:17:51.224: INFO: rc: 1 May 8 21:17:51.224: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:18:01.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:18:01.334: INFO: rc: 1 May 8 21:18:01.334: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:18:11.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:18:11.447: INFO: rc: 1 May 8 21:18:11.447: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:18:21.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:18:21.540: INFO: rc: 1 May 8 21:18:21.540: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:18:31.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:18:31.648: INFO: rc: 1 May 8 21:18:31.648: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:18:41.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:18:41.755: INFO: rc: 1 May 8 21:18:41.756: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:18:51.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:18:51.861: INFO: rc: 1 May 8 21:18:51.861: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:19:01.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:19:01.957: INFO: rc: 1 May 8 21:19:01.957: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 8 21:19:11.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:19:12.060: INFO: rc: 1 May 8 21:19:12.060: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: May 8 21:19:12.061: INFO: Scaling statefulset ss to 0 May 8 21:19:12.068: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 8 21:19:12.070: INFO: Deleting all statefulset in ns statefulset-9384 May 8 21:19:12.072: INFO: Scaling statefulset ss to 0 May 8 21:19:12.079: INFO: Waiting for statefulset status.replicas updated to 0 May 8 21:19:12.081: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:19:12.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9384" for this suite. • [SLOW TEST:365.422 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":23,"skipped":343,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:19:12.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 8 21:19:12.164: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 21:19:12.217: INFO: Waiting for terminating namespaces to be deleted... 
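Stepping back to the burst-scaling StatefulSet test that just passed above: it works by making every replica fail its readiness probe (moving index.html out of the httpd web root via kubectl exec), scaling the set to zero while no pod is Ready, and then tolerating the exec retries until the pods are gone. A minimal shell sketch of the same sequence, assuming the cluster, namespace, and StatefulSet names from this run, and that "burst" here means podManagementPolicy: Parallel:

  # Fail readiness on every replica: the probe target disappears, Ready goes false.
  for pod in ss-0 ss-1 ss-2; do
    kubectl --kubeconfig=/root/.kube/config -n statefulset-9384 exec "$pod" -- \
      /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
  done
  # Burst scale-down: with Parallel pod management this proceeds even though
  # none of the pods is Ready.
  kubectl -n statefulset-9384 scale statefulset ss --replicas=0
  # Watch the replicas drain to zero.
  kubectl -n statefulset-9384 get statefulset ss -w
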
May 8 21:19:12.219: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 8 21:19:12.239: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:19:12.239: INFO: Container kindnet-cni ready: true, restart count 0 May 8 21:19:12.239: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:19:12.239: INFO: Container kube-proxy ready: true, restart count 0 May 8 21:19:12.239: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 8 21:19:12.257: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:19:12.258: INFO: Container kube-proxy ready: true, restart count 0 May 8 21:19:12.258: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 8 21:19:12.258: INFO: Container kube-hunter ready: false, restart count 0 May 8 21:19:12.258: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:19:12.258: INFO: Container kindnet-cni ready: true, restart count 0 May 8 21:19:12.258: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 8 21:19:12.258: INFO: Container kube-bench ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 8 21:19:12.353: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 8 21:19:12.353: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 8 21:19:12.353: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 8 21:19:12.353: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 8 21:19:12.353: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 8 21:19:12.365: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
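That last step asks for CPU no node can still provide: the filler pods above were sized from node allocatable minus the requests already logged, so one more request of any size cannot fit. A sketch of reproducing this by hand, assuming the namespace from this run; the pod name and image match the events below, while the request value and the use of the v1.17-era --requests flag are illustrative:

  # What the node can still offer; the test subtracts existing pod requests itself.
  kubectl get node jerma-worker -o jsonpath='{.status.allocatable.cpu}'
  # Request more CPU than remains anywhere; this pod stays Pending forever.
  kubectl -n sched-pred-4986 run additional-pod --image=k8s.gcr.io/pause:3.1 \
    --restart=Never --requests='cpu=500m'
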
STEP: Considering event: Type = [Normal], Name = [filler-pod-302c179a-520d-4621-90dd-d00345322cf5.160d2a4cb025b7cc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4986/filler-pod-302c179a-520d-4621-90dd-d00345322cf5 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-302c179a-520d-4621-90dd-d00345322cf5.160d2a4d3e2a84c2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-302c179a-520d-4621-90dd-d00345322cf5.160d2a4d7eb1a8d7], Reason = [Created], Message = [Created container filler-pod-302c179a-520d-4621-90dd-d00345322cf5] STEP: Considering event: Type = [Normal], Name = [filler-pod-302c179a-520d-4621-90dd-d00345322cf5.160d2a4d8cdcbab1], Reason = [Started], Message = [Started container filler-pod-302c179a-520d-4621-90dd-d00345322cf5] STEP: Considering event: Type = [Normal], Name = [filler-pod-56af710f-1e8d-49a9-b24e-0e9332a99198.160d2a4caf93db02], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4986/filler-pod-56af710f-1e8d-49a9-b24e-0e9332a99198 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-56af710f-1e8d-49a9-b24e-0e9332a99198.160d2a4cfa7a27e9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-56af710f-1e8d-49a9-b24e-0e9332a99198.160d2a4d51fa9210], Reason = [Created], Message = [Created container filler-pod-56af710f-1e8d-49a9-b24e-0e9332a99198] STEP: Considering event: Type = [Normal], Name = [filler-pod-56af710f-1e8d-49a9-b24e-0e9332a99198.160d2a4d7514afad], Reason = [Started], Message = [Started container filler-pod-56af710f-1e8d-49a9-b24e-0e9332a99198] STEP: Considering event: Type = [Warning], Name = [additional-pod.160d2a4d9f925d7a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:19:17.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4986" for this suite. 
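The Warning event quoted above is what the test asserts on, and the same record can be pulled straight back out of the API. A sketch, assuming the namespace from this run has not been destroyed yet:

  # FailedScheduling events carry the scheduler's per-node rejection reasons
  # ("1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu").
  kubectl -n sched-pred-4986 get events \
    --field-selector reason=FailedScheduling,involvedObject.name=additional-pod
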
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.879 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":24,"skipped":344,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:19:17.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:19:18.140: INFO: Create a RollingUpdate DaemonSet May 8 21:19:18.144: INFO: Check that daemon pods launch on every node of the cluster May 8 21:19:18.187: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 21:19:18.193: INFO: Number of nodes with available pods: 0 May 8 21:19:18.193: INFO: Node jerma-worker is running more than one daemon pod May 8 21:19:19.198: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 21:19:19.202: INFO: Number of nodes with available pods: 0 May 8 21:19:19.202: INFO: Node jerma-worker is running more than one daemon pod May 8 21:19:20.319: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 21:19:20.322: INFO: Number of nodes with available pods: 0 May 8 21:19:20.322: INFO: Node jerma-worker is running more than one daemon pod May 8 21:19:21.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 21:19:21.213: INFO: Number of nodes with available pods: 0 May 8 21:19:21.213: INFO: Node jerma-worker is running more than one daemon pod May 8 21:19:22.198: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 21:19:22.201: INFO: Number of nodes with available pods: 2 May 8 21:19:22.202: INFO: Number of running nodes: 2, number of available pods: 2 May 8 21:19:22.202: INFO: Update the DaemonSet to trigger a rollout May 8 21:19:22.207: INFO: Updating 
DaemonSet daemon-set May 8 21:19:40.230: INFO: Roll back the DaemonSet before rollout is complete May 8 21:19:40.237: INFO: Updating DaemonSet daemon-set May 8 21:19:40.237: INFO: Make sure DaemonSet rollback is complete May 8 21:19:40.264: INFO: Wrong image for pod: daemon-set-78795. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 8 21:19:40.264: INFO: Pod daemon-set-78795 is not available May 8 21:19:40.268: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 21:19:41.272: INFO: Wrong image for pod: daemon-set-78795. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 8 21:19:41.272: INFO: Pod daemon-set-78795 is not available May 8 21:19:41.276: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 21:19:42.278: INFO: Pod daemon-set-9wv6r is not available May 8 21:19:42.282: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5862, will wait for the garbage collector to delete the pods May 8 21:19:42.366: INFO: Deleting DaemonSet.extensions daemon-set took: 25.752944ms May 8 21:19:42.766: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.26223ms May 8 21:19:49.269: INFO: Number of nodes with available pods: 0 May 8 21:19:49.269: INFO: Number of running nodes: 0, number of available pods: 0 May 8 21:19:49.272: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5862/daemonsets","resourceVersion":"14531465"},"items":null} May 8 21:19:49.274: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5862/pods","resourceVersion":"14531465"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:19:49.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5862" for this suite. 
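The rollback flow above (update to an unpullable image, undo before the rollout completes, verify the untouched pods were not restarted) maps onto three kubectl commands. A sketch using the names from this run; the container name app is an assumption, since the log never prints it:

  # Trigger a rolling update to an image that can never run.
  kubectl -n daemonsets-5862 set image daemonset/daemon-set app=foo:non-existent
  # Roll back to the previous pod template before the rollout finishes.
  kubectl -n daemonsets-5862 rollout undo daemonset/daemon-set
  # Pods that never picked up the bad template should converge without restarts.
  kubectl -n daemonsets-5862 rollout status daemonset/daemon-set
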
• [SLOW TEST:31.313 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":25,"skipped":349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:19:49.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:19:49.379: INFO: Creating deployment "test-recreate-deployment" May 8 21:19:49.392: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 8 21:19:49.404: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 8 21:19:51.411: INFO: Waiting for deployment "test-recreate-deployment" to complete May 8 21:19:51.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569589, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569589, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569589, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724569589, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 21:19:53.417: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 8 21:19:53.423: INFO: Updating deployment test-recreate-deployment May 8 21:19:53.423: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 8 21:19:54.067: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2943 /apis/apps/v1/namespaces/deployment-2943/deployments/test-recreate-deployment ea704e04-4b66-4480-9522-51d9d8665fe1 14531521 2 2020-05-08 21:19:49 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002923eb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-08 21:19:53 +0000 UTC,LastTransitionTime:2020-05-08 21:19:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-08 21:19:53 +0000 UTC,LastTransitionTime:2020-05-08 21:19:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 8 21:19:54.071: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2943 /apis/apps/v1/namespaces/deployment-2943/replicasets/test-recreate-deployment-5f94c574ff 768d4c74-b5bb-484e-b1fb-f4040a9ec8ff 14531519 1 2020-05-08 21:19:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment ea704e04-4b66-4480-9522-51d9d8665fe1 0xc002746447 0xc002746448}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002746538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 8 21:19:54.071: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 
8 21:19:54.071: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-2943 /apis/apps/v1/namespaces/deployment-2943/replicasets/test-recreate-deployment-799c574856 487bd619-52a4-4f3b-9092-01e52467f809 14531510 2 2020-05-08 21:19:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ea704e04-4b66-4480-9522-51d9d8665fe1 0xc0027465a7 0xc0027465a8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002746618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 8 21:19:54.112: INFO: Pod "test-recreate-deployment-5f94c574ff-zw6gt" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-zw6gt test-recreate-deployment-5f94c574ff- deployment-2943 /api/v1/namespaces/deployment-2943/pods/test-recreate-deployment-5f94c574ff-zw6gt 1b418892-d9ae-4e48-8fb4-846546fde235 14531522 0 2020-05-08 21:19:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 768d4c74-b5bb-484e-b1fb-f4040a9ec8ff 0xc002746be7 0xc002746be8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4nxkq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4nxkq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4nxkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:19:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:19:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:19:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-08 21:19:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:19:54.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2943" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":26,"skipped":397,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:19:54.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 8 21:19:54.271: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:20:09.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3635" for this suite. 
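The record above exercises the watch pattern end to end: create a pod, observe the ADDED event, delete it gracefully, and observe the DELETED event. A minimal sketch of the same pattern with client-go, outside the e2e framework (assumes client-go v0.18 or later, where Watch takes a context; the namespace and kubeconfig path are copied from the log, everything else is illustrative):

// watchpods.go - sketch only; not the e2e framework's own code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Open a watch on the test namespace; the e2e test asserts that an ADDED
	// event is observed for the submitted pod and a DELETED event after
	// graceful deletion.
	w, err := clientset.CoreV1().Pods("pods-3635").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("observed event: %s\n", ev.Type)
	}
}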
• [SLOW TEST:15.161 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:20:09.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7170 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7170 STEP: Creating statefulset with conflicting port in namespace statefulset-7170 STEP: Waiting until pod test-pod starts running in namespace statefulset-7170 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7170 May 8 21:20:15.507: INFO: Observed stateful pod in namespace: statefulset-7170, name: ss-0, uid: c129f6b4-44f8-4d55-a68d-fafc6d5dbe29, status phase: Pending. Waiting for statefulset controller to delete. May 8 21:20:16.006: INFO: Observed stateful pod in namespace: statefulset-7170, name: ss-0, uid: c129f6b4-44f8-4d55-a68d-fafc6d5dbe29, status phase: Failed. Waiting for statefulset controller to delete. May 8 21:20:16.068: INFO: Observed stateful pod in namespace: statefulset-7170, name: ss-0, uid: c129f6b4-44f8-4d55-a68d-fafc6d5dbe29, status phase: Failed. Waiting for statefulset controller to delete. 
May 8 21:20:16.079: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7170 STEP: Removing pod with conflicting port in namespace statefulset-7170 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7170 and enters the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 8 21:20:20.243: INFO: Deleting all statefulsets in ns statefulset-7170 May 8 21:20:20.246: INFO: Scaling statefulset ss to 0 May 8 21:20:30.293: INFO: Waiting for statefulset status.replicas to be updated to 0 May 8 21:20:30.296: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:20:30.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7170" for this suite. • [SLOW TEST:21.043 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":28,"skipped":440,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:20:30.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 8 21:20:30.422: INFO: Waiting up to 5m0s for pod "downward-api-6265f676-e001-48a0-b320-2900e60d1576" in namespace "downward-api-4581" to be "success or failure" May 8 21:20:30.440: INFO: Pod "downward-api-6265f676-e001-48a0-b320-2900e60d1576": Phase="Pending", Reason="", readiness=false. Elapsed: 17.965883ms May 8 21:20:32.445: INFO: Pod "downward-api-6265f676-e001-48a0-b320-2900e60d1576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022561232s May 8 21:20:34.449: INFO: Pod "downward-api-6265f676-e001-48a0-b320-2900e60d1576": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026189756s STEP: Saw pod success May 8 21:20:34.449: INFO: Pod "downward-api-6265f676-e001-48a0-b320-2900e60d1576" satisfied condition "success or failure" May 8 21:20:34.451: INFO: Trying to get logs from node jerma-worker pod downward-api-6265f676-e001-48a0-b320-2900e60d1576 container dapi-container: STEP: delete the pod May 8 21:20:34.545: INFO: Waiting for pod downward-api-6265f676-e001-48a0-b320-2900e60d1576 to disappear May 8 21:20:34.660: INFO: Pod downward-api-6265f676-e001-48a0-b320-2900e60d1576 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:20:34.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4581" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":453,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:20:34.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 8 21:20:34.727: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 21:20:34.736: INFO: Waiting for terminating namespaces to be deleted... 
May 8 21:20:34.738: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 8 21:20:34.742: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 8 21:20:34.742: INFO: Container kindnet-cni ready: true, restart count 0 May 8 21:20:34.742: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 8 21:20:34.742: INFO: Container kube-proxy ready: true, restart count 0 May 8 21:20:34.742: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 8 21:20:34.765: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 8 21:20:34.765: INFO: Container kube-hunter ready: false, restart count 0 May 8 21:20:34.765: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 8 21:20:34.765: INFO: Container kindnet-cni ready: true, restart count 0 May 8 21:20:34.765: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 8 21:20:34.765: INFO: Container kube-bench ready: false, restart count 0 May 8 21:20:34.765: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 8 21:20:34.765: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f65ad7b3-e94c-4884-9c1e-c3752444ef31 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-f65ad7b3-e94c-4884-9c1e-c3752444ef31 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f65ad7b3-e94c-4884-9c1e-c3752444ef31 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:25:42.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6659" for this suite. 
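To make the hostPort/hostIP conflict in the scheduling test above concrete, here is a sketch of the pod shape involved, built with the k8s.io/api types. This is a simplification, not the framework's helper code: the real test places pod5 via the random node label applied above rather than NodeName; the port and image mirror the log.

// hostportpod.go - sketch only.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod that binds hostPort 54322/TCP on the given node.
// Two such pods conflict even when one binds 0.0.0.0 (hostIP left empty) and
// the other 127.0.0.1, because the addresses overlap; the second pod stays
// Pending, which is exactly what the test asserts.
func hostPortPod(name, nodeName, hostIP string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			NodeName: nodeName, // simplification; the test uses the random label
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Ports: []v1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP, // "" (0.0.0.0) for pod4, "127.0.0.1" for pod5
					Protocol:      v1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	pod5 := hostPortPod("pod5", "jerma-worker", "127.0.0.1")
	b, _ := json.MarshalIndent(pod5, "", "  ")
	fmt.Println(string(b))
}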
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.289 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":30,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:25:42.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:25:54.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-902" for this suite. • [SLOW TEST:11.157 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":31,"skipped":500,"failed":0} SSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:25:54.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9565.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9565.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9565.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9565.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 21:26:00.232: INFO: DNS probes using dns-9565/dns-test-12456958-27ce-47df-a947-3ea72a027ade succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:26:00.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9565" for this suite. 
• [SLOW TEST:6.191 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":32,"skipped":503,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:26:00.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:26:00.366: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c" in namespace "projected-5484" to be "success or failure" May 8 21:26:00.370: INFO: Pod "downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382035ms May 8 21:26:02.374: INFO: Pod "downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007535359s May 8 21:26:04.407: INFO: Pod "downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04068628s STEP: Saw pod success May 8 21:26:04.407: INFO: Pod "downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c" satisfied condition "success or failure" May 8 21:26:04.409: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c container client-container: STEP: delete the pod May 8 21:26:04.480: INFO: Waiting for pod downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c to disappear May 8 21:26:05.174: INFO: Pod downwardapi-volume-b4c53ff4-c95f-4bc8-914a-2a6451f59b2c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:26:05.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5484" for this suite. 
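For reference, a sketch of the kind of pod this projected downward-API test creates: the container's memory limit surfaced as a file through a projected volume. The container name follows the log; the 64Mi limit, volume name, and mount path are illustrative choices, not the framework's values.

// downwardpod.go - sketch only.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitPod builds a pod whose memory limit is exposed to the container
// as the file /etc/podinfo/memory_limit (reported in bytes, since the default
// divisor is 1).
func memoryLimitPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						v1.ResourceMemory: resource.MustParse("64Mi"), // illustrative value
					},
				},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &v1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(memoryLimitPod(), "", "  ")
	fmt.Println(string(b))
}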
• [SLOW TEST:5.459 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":504,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:26:05.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5730.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5730.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5730.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5730.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5730.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 33.236.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.236.33_udp@PTR;check="$$(dig +tcp +noall +answer +search 33.236.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.236.33_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5730.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5730.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5730.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5730.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5730.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5730.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 33.236.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.236.33_udp@PTR;check="$$(dig +tcp +noall +answer +search 33.236.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.236.33_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 21:26:12.125: INFO: Unable to read wheezy_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.130: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.134: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.175: INFO: Unable to read jessie_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.178: INFO: Unable to read jessie_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.181: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.183: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:12.201: INFO: Lookups using dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c failed for: [wheezy_udp@dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_udp@dns-test-service.dns-5730.svc.cluster.local jessie_tcp@dns-test-service.dns-5730.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local] May 8 21:26:17.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.211: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods 
dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.214: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.218: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.235: INFO: Unable to read jessie_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.237: INFO: Unable to read jessie_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.240: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.243: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:17.263: INFO: Lookups using dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c failed for: [wheezy_udp@dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_udp@dns-test-service.dns-5730.svc.cluster.local jessie_tcp@dns-test-service.dns-5730.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local] May 8 21:26:22.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.212: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.215: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.217: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.235: INFO: Unable to read jessie_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could 
not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.237: INFO: Unable to read jessie_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.240: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.243: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:22.343: INFO: Lookups using dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c failed for: [wheezy_udp@dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_udp@dns-test-service.dns-5730.svc.cluster.local jessie_tcp@dns-test-service.dns-5730.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local] May 8 21:26:27.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.210: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.213: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.223: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.240: INFO: Unable to read jessie_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.242: INFO: Unable to read jessie_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.244: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.246: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod 
dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:27.258: INFO: Lookups using dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c failed for: [wheezy_udp@dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_udp@dns-test-service.dns-5730.svc.cluster.local jessie_tcp@dns-test-service.dns-5730.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local] May 8 21:26:32.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.210: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.214: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.217: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.238: INFO: Unable to read jessie_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.241: INFO: Unable to read jessie_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.244: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.248: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:32.264: INFO: Lookups using dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c failed for: [wheezy_udp@dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_udp@dns-test-service.dns-5730.svc.cluster.local jessie_tcp@dns-test-service.dns-5730.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local] May 8 21:26:37.206: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.210: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.214: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.217: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.235: INFO: Unable to read jessie_udp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.237: INFO: Unable to read jessie_tcp@dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.240: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.243: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local from pod dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c: the server could not find the requested resource (get pods dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c) May 8 21:26:37.259: INFO: Lookups using dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c failed for: [wheezy_udp@dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@dns-test-service.dns-5730.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_udp@dns-test-service.dns-5730.svc.cluster.local jessie_tcp@dns-test-service.dns-5730.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5730.svc.cluster.local] May 8 21:26:42.256: INFO: DNS probes using dns-5730/dns-test-b845a9ed-b0c1-4e9d-bcf0-7176b744f44c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:26:43.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5730" for this suite. 
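The dig loops above reduce to two resolver calls per protocol: an A lookup on the service FQDN and an SRV lookup on the named port _http._tcp. The failures logged while the endpoints were still coming up, followed by the final success, show the records appearing once the backing pods turn ready. A sketch with Go's resolver (service and namespace names copied from the log; this only resolves when run inside the cluster):

// svclookup.go - sketch only; run in-cluster.
package main

import (
	"fmt"
	"net"
)

func main() {
	const fqdn = "dns-test-service.dns-5730.svc.cluster.local"

	// A record: the ClusterIP for a normal service, or one address per
	// ready endpoint for a headless one.
	if addrs, err := net.LookupHost(fqdn); err == nil {
		fmt.Println("A:", addrs)
	} else {
		fmt.Println("A lookup failed:", err)
	}

	// SRV record for the named port "http" over TCP, i.e.
	// _http._tcp.dns-test-service.dns-5730.svc.cluster.local.
	if _, srvs, err := net.LookupSRV("http", "tcp", fqdn); err == nil {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	} else {
		fmt.Println("SRV lookup failed:", err)
	}
}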
• [SLOW TEST:37.482 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":34,"skipped":512,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:26:43.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:26:59.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-949" for this suite. • [SLOW TEST:16.419 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":35,"skipped":518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:26:59.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3e183b14-0baa-4423-9cf5-65f6d1f94d10 STEP: Creating a pod to test consume configMaps May 8 21:26:59.752: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998" in namespace "projected-7585" to be "success or failure" May 8 21:26:59.756: INFO: Pod "pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998": Phase="Pending", Reason="", readiness=false. Elapsed: 3.196562ms May 8 21:27:01.774: INFO: Pod "pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022072256s May 8 21:27:03.804: INFO: Pod "pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051692684s STEP: Saw pod success May 8 21:27:03.804: INFO: Pod "pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998" satisfied condition "success or failure" May 8 21:27:03.806: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998 container projected-configmap-volume-test: STEP: delete the pod May 8 21:27:03.835: INFO: Waiting for pod pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998 to disappear May 8 21:27:03.839: INFO: Pod pod-projected-configmaps-79fb00b6-a4ba-4342-b461-f1e6da785998 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:03.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7585" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":564,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:03.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:27:03.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 8 21:27:04.130: INFO: stderr: "" May 8 21:27:04.130: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:04.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8586" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":37,"skipped":571,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:04.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 8 21:27:04.280: INFO: Created pod &Pod{ObjectMeta:{dns-6794 dns-6794 /api/v1/namespaces/dns-6794/pods/dns-6794 4ba5d30c-b7d0-4e32-96fd-0c9d6d4b8526 14533279 0 2020-05-08 21:27:04 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x4fkm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x4fkm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x4fkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
May 8 21:27:08.299: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6794 PodName:dns-6794 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:27:08.299: INFO: >>> kubeConfig: /root/.kube/config I0508 21:27:08.332148 6 log.go:172] (0xc001982fd0) (0xc001aac140) Create stream I0508 21:27:08.332187 6 log.go:172] (0xc001982fd0) (0xc001aac140) Stream added, broadcasting: 1 I0508 21:27:08.334420 6 log.go:172] (0xc001982fd0) Reply frame received for 1 I0508 21:27:08.334455 6 log.go:172] (0xc001982fd0) (0xc001de08c0) Create stream I0508 21:27:08.334466 6 log.go:172] (0xc001982fd0) (0xc001de08c0) Stream added, broadcasting: 3 I0508 21:27:08.335393 6 log.go:172] (0xc001982fd0) Reply frame received for 3 I0508 21:27:08.335436 6 log.go:172] (0xc001982fd0) (0xc001d6c820) Create stream I0508 21:27:08.335451 6 log.go:172] (0xc001982fd0) (0xc001d6c820) Stream added, broadcasting: 5 I0508 21:27:08.336409 6 log.go:172] (0xc001982fd0) Reply frame received for 5 I0508 21:27:08.406298 6 log.go:172] (0xc001982fd0) Data frame received for 3 I0508 21:27:08.406339 6 log.go:172] (0xc001de08c0) (3) Data frame handling I0508 21:27:08.406370 6 log.go:172] (0xc001de08c0) (3) Data frame sent I0508 21:27:08.407364 6 log.go:172] (0xc001982fd0) Data frame received for 3 I0508 21:27:08.407412 6 log.go:172] (0xc001de08c0) (3) Data frame handling I0508 21:27:08.407441 6 log.go:172] (0xc001982fd0) Data frame received for 5 I0508 21:27:08.407458 6 log.go:172] (0xc001d6c820) (5) Data frame handling I0508 21:27:08.408766 6 log.go:172] (0xc001982fd0) Data frame received for 1 I0508 21:27:08.408785 6 log.go:172] (0xc001aac140) (1) Data frame handling I0508 21:27:08.408794 6 log.go:172] (0xc001aac140) (1) Data frame sent I0508 21:27:08.408804 6 log.go:172] (0xc001982fd0) (0xc001aac140) Stream removed, broadcasting: 1 I0508 21:27:08.409269 6 log.go:172] (0xc001982fd0) Go away received I0508 21:27:08.409415 6 log.go:172] (0xc001982fd0) (0xc001aac140) Stream removed, broadcasting: 1 I0508 21:27:08.409440 6 log.go:172] (0xc001982fd0) (0xc001de08c0) Stream removed, broadcasting: 3 I0508 21:27:08.409459 6 log.go:172] (0xc001982fd0) (0xc001d6c820) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 8 21:27:08.409: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6794 PodName:dns-6794 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:27:08.409: INFO: >>> kubeConfig: /root/.kube/config I0508 21:27:08.436843 6 log.go:172] (0xc001983600) (0xc001aac460) Create stream I0508 21:27:08.436866 6 log.go:172] (0xc001983600) (0xc001aac460) Stream added, broadcasting: 1 I0508 21:27:08.439170 6 log.go:172] (0xc001983600) Reply frame received for 1 I0508 21:27:08.439203 6 log.go:172] (0xc001983600) (0xc001ebeaa0) Create stream I0508 21:27:08.439214 6 log.go:172] (0xc001983600) (0xc001ebeaa0) Stream added, broadcasting: 3 I0508 21:27:08.440059 6 log.go:172] (0xc001983600) Reply frame received for 3 I0508 21:27:08.440106 6 log.go:172] (0xc001983600) (0xc001a160a0) Create stream I0508 21:27:08.440128 6 log.go:172] (0xc001983600) (0xc001a160a0) Stream added, broadcasting: 5 I0508 21:27:08.441083 6 log.go:172] (0xc001983600) Reply frame received for 5 I0508 21:27:08.516961 6 log.go:172] (0xc001983600) Data frame received for 3 I0508 21:27:08.516994 6 log.go:172] (0xc001ebeaa0) (3) Data frame handling I0508 21:27:08.517032 6 log.go:172] (0xc001ebeaa0) (3) Data frame sent I0508 21:27:08.517861 6 log.go:172] (0xc001983600) Data frame received for 5 I0508 21:27:08.517878 6 log.go:172] (0xc001a160a0) (5) Data frame handling I0508 21:27:08.517901 6 log.go:172] (0xc001983600) Data frame received for 3 I0508 21:27:08.517927 6 log.go:172] (0xc001ebeaa0) (3) Data frame handling I0508 21:27:08.519434 6 log.go:172] (0xc001983600) Data frame received for 1 I0508 21:27:08.519473 6 log.go:172] (0xc001aac460) (1) Data frame handling I0508 21:27:08.519531 6 log.go:172] (0xc001aac460) (1) Data frame sent I0508 21:27:08.519552 6 log.go:172] (0xc001983600) (0xc001aac460) Stream removed, broadcasting: 1 I0508 21:27:08.519570 6 log.go:172] (0xc001983600) Go away received I0508 21:27:08.519763 6 log.go:172] (0xc001983600) (0xc001aac460) Stream removed, broadcasting: 1 I0508 21:27:08.519806 6 log.go:172] (0xc001983600) (0xc001ebeaa0) Stream removed, broadcasting: 3 I0508 21:27:08.519831 6 log.go:172] (0xc001983600) (0xc001a160a0) Stream removed, broadcasting: 5 May 8 21:27:08.519: INFO: Deleting pod dns-6794... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:08.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6794" for this suite. 
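
A note on the DNS test above: the pod dump shows DNSPolicy:None with a PodDNSConfig of one nameserver (1.1.1.1) and one search domain (resolv.conf.local). DNSNone tells the kubelet to skip its cluster-DNS resolv.conf generation entirely and render /etc/resolv.conf from dnsConfig alone; the two agnhost execs then read the suffix and server lists back out of the running container. A minimal client-go sketch of that pod shape (v1.17-era API; the function, pod, and namespace names here are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod builds a pod whose /etc/resolv.conf comes entirely from
// dnsConfig: DNSNone suppresses the kubelet's cluster-DNS defaults.
func customDNSPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-example", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
}

func main() {
	pod := customDNSPod("dns-example-ns") // illustrative namespace
	fmt.Println(pod.Spec.DNSPolicy, pod.Spec.DNSConfig.Nameservers)
}

With any other DNS policy (e.g. ClusterFirst), dnsConfig entries are merged into the kubelet's generated resolv.conf rather than replacing it.
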
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":38,"skipped":573,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:08.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 8 21:27:08.660: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:17.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9703" for this suite. • [SLOW TEST:8.528 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":39,"skipped":573,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:17.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-5ab59cc3-0cff-4588-ba8b-b18fb40e8f4a STEP: Creating a pod to test consume secrets May 8 21:27:17.230: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00" in namespace "projected-960" to be "success or failure" May 8 21:27:17.234: INFO: Pod "pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046911ms May 8 21:27:19.348: INFO: Pod "pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.118017804s May 8 21:27:21.353: INFO: Pod "pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122328627s STEP: Saw pod success May 8 21:27:21.353: INFO: Pod "pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00" satisfied condition "success or failure" May 8 21:27:21.356: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00 container projected-secret-volume-test: STEP: delete the pod May 8 21:27:21.378: INFO: Waiting for pod pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00 to disappear May 8 21:27:21.381: INFO: Pod pod-projected-secrets-851b5db8-c5d5-4381-aa00-da8b1c0fda00 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:21.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-960" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":576,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:21.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-46ec119d-6e23-4d8c-aac0-3fc0bffd0801 in namespace container-probe-5568 May 8 21:27:25.490: INFO: Started pod liveness-46ec119d-6e23-4d8c-aac0-3fc0bffd0801 in namespace container-probe-5568 STEP: checking the pod's current state and verifying that restartCount is present May 8 21:27:25.492: INFO: Initial restart count of pod liveness-46ec119d-6e23-4d8c-aac0-3fc0bffd0801 is 0 May 8 21:27:41.533: INFO: Restart count of pod container-probe-5568/liveness-46ec119d-6e23-4d8c-aac0-3fc0bffd0801 is now 1 (16.040241059s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:41.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5568" for this suite. 
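
The restart observed above is driven entirely by the kubelet: it GETs the container's /healthz endpoint on every probe period, and once the endpoint fails failureThreshold consecutive times it kills and restarts the container, which is why restartCount ticks from 0 to 1 roughly 16 seconds in. A sketch of such a probe, assuming the v1.17-era API where the probe handler is the embedded Handler field (renamed ProbeHandler in later releases); the port and thresholds here are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzProbe returns an HTTP liveness probe: the kubelet GETs /healthz
// every PeriodSeconds and restarts the container after FailureThreshold
// consecutive non-2xx/3xx responses.
func healthzProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{ // ProbeHandler in k8s >= 1.23
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080), // illustrative port
			},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       1,
		FailureThreshold:    1, // illustrative values
	}
}

func main() {
	p := healthzProbe()
	fmt.Printf("probe %s:%s every %ds\n", p.HTTPGet.Path, p.HTTPGet.Port.String(), p.PeriodSeconds)
}
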
• [SLOW TEST:20.197 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:41.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:27:41.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce" in namespace "downward-api-1866" to be "success or failure" May 8 21:27:41.894: INFO: Pod "downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce": Phase="Pending", Reason="", readiness=false. Elapsed: 63.027307ms May 8 21:27:43.899: INFO: Pod "downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068050685s May 8 21:27:45.903: INFO: Pod "downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072043196s May 8 21:27:47.908: INFO: Pod "downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076424689s STEP: Saw pod success May 8 21:27:47.908: INFO: Pod "downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce" satisfied condition "success or failure" May 8 21:27:47.911: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce container client-container: STEP: delete the pod May 8 21:27:47.931: INFO: Waiting for pod downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce to disappear May 8 21:27:47.936: INFO: Pod downwardapi-volume-42d6d286-6bcd-4d89-8cd9-234f43ed2bce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:47.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1866" for this suite. 
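
In the downward API volume test above, the kubelet materializes container fields as files on a mounted volume: a resourceFieldRef pointing at requests.memory is rendered into the file, and the test simply reads it back via the container's output. A sketch of the volume definition (the volume name, file path, and divisor are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPIVolume exposes the named container's memory request as a
// file on the mount, which is the mechanism the test exercises.
func downwardAPIVolume(container string) corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: container,
						Resource:      "requests.memory",
						// Report in MiB; Divisor defaults to 1 (raw bytes) if unset.
						Divisor: resource.MustParse("1Mi"),
					},
				}},
			},
		},
	}
}

func main() {
	v := downwardAPIVolume("client-container")
	fmt.Println(v.DownwardAPI.Items[0].Path)
}
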
• [SLOW TEST:6.360 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":607,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:47.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:27:48.763: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:27:50.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570068, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570068, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570068, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570068, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:27:53.812: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:27:53.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8171" for this suite. STEP: Destroying namespace "webhook-8171-markers" for this suite. 
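
The webhook fixture above has three moving parts: a deployment serving the admission endpoint over TLS, a service in front of it, and a MutatingWebhookConfiguration that points the API server at that service for pod CREATE requests. A sketch of the registration object using admissionregistration.k8s.io/v1, where SideEffects and AdmissionReviewVersions are required fields; every name, namespace, and path below is a placeholder rather than the suite's actual values:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podMutatingWebhook registers a service-backed webhook that intercepts
// pod CREATE requests; the API server sends each AdmissionReview to the
// service endpoint and applies the returned patch to the object.
func podMutatingWebhook(caBundle []byte) *admissionregistrationv1.MutatingWebhookConfiguration {
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutating-pods" // placeholder endpoint path
	return &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-pods.example.com", // placeholder
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-namespace", // placeholder
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle, // PEM CA that signed the serving cert
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}

func main() {
	cfg := podMutatingWebhook([]byte("<ca-pem>"))
	fmt.Println(cfg.Webhooks[0].Name)
}
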
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.236 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":43,"skipped":609,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:27:54.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:28:10.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8620" for this suite. • [SLOW TEST:16.285 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":44,"skipped":610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:28:10.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 8 21:28:10.578: INFO: Waiting up to 5m0s for pod "pod-37c34188-0afa-4f72-a32d-7bca4ff8838d" in namespace "emptydir-7650" to be "success or failure" May 8 21:28:10.583: INFO: Pod "pod-37c34188-0afa-4f72-a32d-7bca4ff8838d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.591614ms May 8 21:28:12.587: INFO: Pod "pod-37c34188-0afa-4f72-a32d-7bca4ff8838d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009233071s May 8 21:28:14.660: INFO: Pod "pod-37c34188-0afa-4f72-a32d-7bca4ff8838d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08234899s STEP: Saw pod success May 8 21:28:14.660: INFO: Pod "pod-37c34188-0afa-4f72-a32d-7bca4ff8838d" satisfied condition "success or failure" May 8 21:28:14.663: INFO: Trying to get logs from node jerma-worker2 pod pod-37c34188-0afa-4f72-a32d-7bca4ff8838d container test-container: STEP: delete the pod May 8 21:28:14.742: INFO: Waiting for pod pod-37c34188-0afa-4f72-a32d-7bca4ff8838d to disappear May 8 21:28:14.746: INFO: Pod pod-37c34188-0afa-4f72-a32d-7bca4ff8838d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:28:14.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7650" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:28:14.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 8 21:28:14.839: INFO: Waiting up to 5m0s for pod "client-containers-ca3e736a-5236-4620-b15c-8a76f2680923" in namespace "containers-5678" to be "success or failure" May 8 21:28:14.843: INFO: Pod "client-containers-ca3e736a-5236-4620-b15c-8a76f2680923": Phase="Pending", Reason="", readiness=false. Elapsed: 3.386892ms May 8 21:28:16.846: INFO: Pod "client-containers-ca3e736a-5236-4620-b15c-8a76f2680923": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006936644s May 8 21:28:18.849: INFO: Pod "client-containers-ca3e736a-5236-4620-b15c-8a76f2680923": Phase="Running", Reason="", readiness=true. Elapsed: 4.010257662s May 8 21:28:20.854: INFO: Pod "client-containers-ca3e736a-5236-4620-b15c-8a76f2680923": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014871738s STEP: Saw pod success May 8 21:28:20.854: INFO: Pod "client-containers-ca3e736a-5236-4620-b15c-8a76f2680923" satisfied condition "success or failure" May 8 21:28:20.857: INFO: Trying to get logs from node jerma-worker pod client-containers-ca3e736a-5236-4620-b15c-8a76f2680923 container test-container: STEP: delete the pod May 8 21:28:20.874: INFO: Waiting for pod client-containers-ca3e736a-5236-4620-b15c-8a76f2680923 to disappear May 8 21:28:20.935: INFO: Pod client-containers-ca3e736a-5236-4620-b15c-8a76f2680923 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:28:20.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5678" for this suite. 
• [SLOW TEST:6.189 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":660,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:28:20.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:28:21.563: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:28:23.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570101, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570101, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570101, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570101, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:28:26.726: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:28:27.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4314" for this suite. 
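
The listing test above exercises the collection endpoints of admissionregistration.k8s.io/v1: the suite labels its webhook configurations, lists them by selector, then removes them with a single DeleteCollection call and verifies that mutation stops. A client-go sketch of that flow, assuming the context-taking signatures of client-go v0.18+ (the v1.17-era client exposes the same methods without the context argument); the label selector is a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List by label, then delete the whole collection in one call,
	// mirroring the test's list-then-DeleteCollection flow.
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"} // placeholder label
	list, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d webhook configurations\n", len(list.Items))

	err = client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel)
	if err != nil {
		panic(err)
	}
}
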
STEP: Destroying namespace "webhook-4314-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.517 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":47,"skipped":666,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:28:27.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:28:27.553: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 8 21:28:30.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-808 create -f -' May 8 21:28:33.802: INFO: stderr: "" May 8 21:28:33.802: INFO: stdout: "e2e-test-crd-publish-openapi-7333-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 8 21:28:33.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-808 delete e2e-test-crd-publish-openapi-7333-crds test-foo' May 8 21:28:33.919: INFO: stderr: "" May 8 21:28:33.919: INFO: stdout: "e2e-test-crd-publish-openapi-7333-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 8 21:28:33.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-808 apply -f -' May 8 21:28:34.158: INFO: stderr: "" May 8 21:28:34.158: INFO: stdout: "e2e-test-crd-publish-openapi-7333-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 8 21:28:34.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-808 delete e2e-test-crd-publish-openapi-7333-crds test-foo' May 8 21:28:34.266: INFO: stderr: "" May 8 21:28:34.266: INFO: stdout: "e2e-test-crd-publish-openapi-7333-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 8 21:28:34.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-808 create -f -' May 8 21:28:34.507: INFO: rc: 1 May 8 21:28:34.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-808 apply -f -' May 8 21:28:34.738: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 8 21:28:34.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-808 create -f -' May 8 21:28:34.988: INFO: rc: 1 May 8 21:28:34.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-808 apply -f -' May 8 21:28:35.207: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 8 21:28:35.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7333-crds' May 8 21:28:35.455: INFO: stderr: "" May 8 21:28:35.455: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7333-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 8 21:28:35.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7333-crds.metadata' May 8 21:28:35.765: INFO: stderr: "" May 8 21:28:35.765: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7333-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 8 21:28:35.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7333-crds.spec' May 8 21:28:36.048: INFO: stderr: "" May 8 21:28:36.048: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7333-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 8 21:28:36.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7333-crds.spec.bars' May 8 21:28:36.309: INFO: stderr: "" May 8 21:28:36.309: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7333-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 8 21:28:36.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7333-crds.spec.bars2' May 8 21:28:36.545: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:28:39.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-808" for this suite. • [SLOW TEST:11.965 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":48,"skipped":674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:28:39.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 8 21:28:52.163: INFO: 10 pods remaining May 8 21:28:52.163: INFO: 10 pods has nil DeletionTimestamp May 8 21:28:52.163: INFO: May 8 21:28:57.162: INFO: 10 pods remaining May 8 
21:28:57.162: INFO: 10 pods has nil DeletionTimestamp May 8 21:28:57.163: INFO: May 8 21:29:02.163: INFO: 10 pods remaining May 8 21:29:02.163: INFO: 10 pods has nil DeletionTimestamp May 8 21:29:02.163: INFO: STEP: Gathering metrics W0508 21:29:07.165090 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 21:29:07.165: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:29:07.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4800" for this suite. • [SLOW TEST:27.745 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":49,"skipped":697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:29:07.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:29:07.444: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 8 21:29:07.486: INFO: Pod name sample-pod: Found 0 pods out of 1 May 8 21:29:12.490: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running 
May 8 21:29:12.490: INFO: Creating deployment "test-rolling-update-deployment" May 8 21:29:12.544: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 8 21:29:12.784: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 8 21:29:14.900: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 8 21:29:14.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570152, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570152, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570152, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570152, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 21:29:16.973: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 8 21:29:16.981: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7691 /apis/apps/v1/namespaces/deployment-7691/deployments/test-rolling-update-deployment 8a5cf857-ee53-4c72-90b3-625eab132e41 14534367 1 2020-05-08 21:29:12 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047b8218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-08 21:29:12 +0000 UTC,LastTransitionTime:2020-05-08 21:29:12 +0000
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-08 21:29:16 +0000 UTC,LastTransitionTime:2020-05-08 21:29:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 8 21:29:16.983: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7691 /apis/apps/v1/namespaces/deployment-7691/replicasets/test-rolling-update-deployment-67cf4f6444 3de620aa-05f6-4132-906a-be7f3e5c1f6b 14534356 1 2020-05-08 21:29:12 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 8a5cf857-ee53-4c72-90b3-625eab132e41 0xc0047e73a7 0xc0047e73a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047e7428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 8 21:29:16.983: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 8 21:29:16.983: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7691 /apis/apps/v1/namespaces/deployment-7691/replicasets/test-rolling-update-controller e0cd8c17-4db2-43b8-8f7b-8e2306f1c722 14534365 2 2020-05-08 21:29:07 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 8a5cf857-ee53-4c72-90b3-625eab132e41 0xc0047e72a7 0xc0047e72a8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0047e7318 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 8 21:29:16.986: INFO: Pod "test-rolling-update-deployment-67cf4f6444-8psfh" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-8psfh test-rolling-update-deployment-67cf4f6444- deployment-7691 /api/v1/namespaces/deployment-7691/pods/test-rolling-update-deployment-67cf4f6444-8psfh 7d51fd70-37b7-4d77-b190-33d2a0505938 14534355 0 2020-05-08 21:29:12 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 3de620aa-05f6-4132-906a-be7f3e5c1f6b 0xc0047b8757 0xc0047b8758}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vvbfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vvbfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vvbfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTi
me:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:29:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:29:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:29:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:29:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.171,StartTime:2020-05-08 21:29:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 21:29:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://59a192fc98a2ff79ddeb66c1088aeb4eedda079fe0bd85e449c3ad4ed1c6fa7f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:29:16.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7691" for this suite. • [SLOW TEST:9.821 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":50,"skipped":817,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:29:16.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:30:17.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9133" for this suite. 
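The never-ready behavior exercised above follows from a readiness probe that always fails: readiness failures mark the pod NotReady but, unlike liveness failures, never trigger a restart. A minimal client-go sketch of such a pod, with illustrative name, image, and command (not the suite's actual values), using the v0.17-era Create signature; newer client-go adds a context.Context argument:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNeverReadyPod creates a pod whose readiness probe always fails.
// Expected behavior per the test above: the pod runs but is never reported
// Ready, and the kubelet never restarts it.
func createNeverReadyPod(c kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					PeriodSeconds: 5,
				},
			}},
		},
	}
	return c.CoreV1().Pods(ns).Create(pod)
}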
• [SLOW TEST:60.204 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":825,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:30:17.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 8 21:30:17.304: INFO: Waiting up to 5m0s for pod "downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad" in namespace "downward-api-7138" to be "success or failure" May 8 21:30:17.315: INFO: Pod "downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.302993ms May 8 21:30:19.381: INFO: Pod "downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076600837s May 8 21:30:21.386: INFO: Pod "downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad": Phase="Running", Reason="", readiness=true. Elapsed: 4.081251065s May 8 21:30:23.390: INFO: Pod "downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085451293s STEP: Saw pod success May 8 21:30:23.390: INFO: Pod "downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad" satisfied condition "success or failure" May 8 21:30:23.393: INFO: Trying to get logs from node jerma-worker pod downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad container dapi-container: STEP: delete the pod May 8 21:30:23.430: INFO: Waiting for pod downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad to disappear May 8 21:30:23.464: INFO: Pod downward-api-bffb2e36-1bc5-4269-a673-3686d907b1ad no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:30:23.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7138" for this suite. 
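The env vars checked above are wired through the downward API's resourceFieldRef selectors, which resolve to the container's own requests and limits. A sketch of the container shape, with illustrative resource values:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPIContainer exposes its own limits/requests as env vars via
// resourceFieldRef; values below are illustrative, not the test's.
var downwardAPIContainer = corev1.Container{
	Name:  "dapi-container",
	Image: "busybox:1.29",
	Resources: corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("250m"),
			corev1.ResourceMemory: resource.MustParse("32Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("64Mi"),
		},
	},
	Env: []corev1.EnvVar{
		{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
		{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
		{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"}}},
		{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
	},
}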
• [SLOW TEST:6.280 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":830,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:30:23.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:30:23.548: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.745668ms)
May 8 21:30:23.551: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.087713ms)
May 8 21:30:23.555: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.57469ms)
May 8 21:30:23.557: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.910166ms)
May 8 21:30:23.560: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.763077ms)
May 8 21:30:23.564: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.734619ms)
May 8 21:30:23.602: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 38.156566ms)
May 8 21:30:23.606: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.739597ms)
May 8 21:30:23.609: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.25003ms)
May 8 21:30:23.613: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.840202ms)
May 8 21:30:23.617: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.298957ms)
May 8 21:30:23.621: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.984439ms)
May 8 21:30:23.624: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.5861ms)
May 8 21:30:23.628: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.616811ms)
May 8 21:30:23.631: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.524193ms)
May 8 21:30:23.635: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.465981ms)
May 8 21:30:23.638: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.534578ms)
May 8 21:30:23.642: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.00564ms)
May 8 21:30:23.644: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.327617ms)
May 8 21:30:23.647: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 2.581144ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:30:23.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1448" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":53,"skipped":840,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:30:23.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-508241fa-0be2-46e6-89eb-da813415de8c STEP: Creating a pod to test consume configMaps May 8 21:30:23.760: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552" in namespace "configmap-621" to be "success or failure" May 8 21:30:23.770: INFO: Pod "pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552": Phase="Pending", Reason="", readiness=false. Elapsed: 9.210115ms May 8 21:30:25.774: INFO: Pod "pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01339702s May 8 21:30:27.778: INFO: Pod "pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017316369s STEP: Saw pod success May 8 21:30:27.778: INFO: Pod "pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552" satisfied condition "success or failure" May 8 21:30:27.780: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552 container configmap-volume-test: STEP: delete the pod May 8 21:30:27.834: INFO: Waiting for pod pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552 to disappear May 8 21:30:27.869: INFO: Pod pod-configmaps-cc827d4f-4d01-4292-9531-2b5cfa9ee552 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:30:27.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-621" for this suite. 
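DefaultMode here controls the permission bits applied to every file projected from the ConfigMap into the volume. A sketch of the volume definition, with an illustrative ConfigMap name and the 0400 mode the test expects to observe on the mounted files:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/utils/pointer"
)

// configMapVolume mounts a ConfigMap with an explicit defaultMode; every
// projected file gets these permission bits (0400 = read-only for owner).
var configMapVolume = corev1.Volume{
	Name: "configmap-volume",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // illustrative
			DefaultMode:          pointer.Int32Ptr(0400),
		},
	},
}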
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":854,"failed":0} SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:30:27.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-fa45d138-a607-46e4-9775-5fabe1bbd63f STEP: Creating secret with name s-test-opt-upd-7b5a27aa-30fe-41c5-838f-999d2e5b1dff STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fa45d138-a607-46e4-9775-5fabe1bbd63f STEP: Updating secret s-test-opt-upd-7b5a27aa-30fe-41c5-838f-999d2e5b1dff STEP: Creating secret with name s-test-opt-create-8e1cbf5c-4829-4a75-980a-3bb5ca54c6f5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:32:00.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8770" for this suite. • [SLOW TEST:92.595 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:32:00.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 8 21:32:00.519: INFO: >>> kubeConfig: /root/.kube/config May 8 21:32:02.439: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:32:13.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5061" for this suite. • [SLOW TEST:13.437 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":56,"skipped":906,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:32:13.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 8 21:32:22.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 21:32:22.139: INFO: Pod pod-with-prestop-http-hook still exists May 8 21:32:24.139: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 21:32:24.173: INFO: Pod pod-with-prestop-http-hook still exists May 8 21:32:26.139: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 21:32:26.143: INFO: Pod pod-with-prestop-http-hook still exists May 8 21:32:28.139: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 21:32:28.144: INFO: Pod pod-with-prestop-http-hook still exists May 8 21:32:30.139: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 21:32:30.144: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:32:30.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-744" for this suite. 
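The preStop hook verified above is an HTTP GET that the kubelet fires against the handler pod before stopping the container, which is why the test polls the handler after deletion. A sketch of the lifecycle stanza; host, path, and port are illustrative assumptions:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHook builds an HTTP GET lifecycle hook; the kubelet calls it on
// pod deletion, before sending the stop signal to the container.
func preStopHook(handlerIP string) *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PreStop: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: handlerIP,              // IP of the handler pod
				Path: "/echo?msg=prestop",    // illustrative path
				Port: intstr.FromInt(8080),   // illustrative port
			},
		},
	}
}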
• [SLOW TEST:16.267 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":915,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:32:30.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0fd74107-e182-4771-9963-8d51011300fd STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:32:34.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7565" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":928,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:32:34.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:32:34.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0" in namespace "projected-1013" to be "success or failure" May 8 21:32:34.470: INFO: Pod "downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.032351ms May 8 21:32:36.483: INFO: Pod "downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033347467s May 8 21:32:38.487: INFO: Pod "downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037249165s STEP: Saw pod success May 8 21:32:38.487: INFO: Pod "downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0" satisfied condition "success or failure" May 8 21:32:38.508: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0 container client-container: STEP: delete the pod May 8 21:32:38.532: INFO: Waiting for pod downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0 to disappear May 8 21:32:38.536: INFO: Pod downwardapi-volume-4efbbb5f-b25b-41fa-8f7b-6bc0caf319f0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:32:38.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1013" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":937,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:32:38.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 8 21:32:39.203: INFO: created pod pod-service-account-defaultsa May 8 21:32:39.203: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 8 21:32:39.239: INFO: created pod pod-service-account-mountsa May 8 21:32:39.239: INFO: pod pod-service-account-mountsa service account token volume mount: true May 8 21:32:39.243: INFO: created pod pod-service-account-nomountsa May 8 21:32:39.243: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 8 21:32:39.308: INFO: created pod pod-service-account-defaultsa-mountspec May 8 21:32:39.308: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 8 21:32:39.335: INFO: created pod pod-service-account-mountsa-mountspec May 8 21:32:39.335: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 8 21:32:39.376: INFO: created pod pod-service-account-nomountsa-mountspec May 8 21:32:39.377: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 8 21:32:39.384: INFO: created pod pod-service-account-defaultsa-nomountspec May 8 21:32:39.384: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 8 21:32:39.412: INFO: created pod pod-service-account-mountsa-nomountspec May 8 21:32:39.412: INFO: pod 
pod-service-account-mountsa-nomountspec service account token volume mount: false May 8 21:32:39.426: INFO: created pod pod-service-account-nomountsa-nomountspec May 8 21:32:39.426: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:32:39.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4223" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":60,"skipped":950,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:32:39.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:33:08.102: INFO: Container started at 2020-05-08 21:32:49 +0000 UTC, pod became ready at 2020-05-08 21:33:06 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:33:08.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5946" for this suite. 
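The gap measured above (container started 21:32:49, Ready at 21:33:06) is the probe's initial delay at work: the kubelet does not run the readiness probe until InitialDelaySeconds elapse, so the pod cannot become Ready before then. A sketch of such a probe, with illustrative values:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// delayedReadiness cannot report Ready before its initial delay elapses;
// the kubelet waits InitialDelaySeconds before the first probe attempt.
var delayedReadiness = &corev1.Probe{
	Handler: corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
	},
	InitialDelaySeconds: 20, // illustrative; no probing before this
	PeriodSeconds:       5,
}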
• [SLOW TEST:28.545 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":969,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:33:08.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-51af2316-9fd7-4363-8b38-515235565edf in namespace container-probe-9684 May 8 21:33:12.263: INFO: Started pod liveness-51af2316-9fd7-4363-8b38-515235565edf in namespace container-probe-9684 STEP: checking the pod's current state and verifying that restartCount is present May 8 21:33:12.266: INFO: Initial restart count of pod liveness-51af2316-9fd7-4363-8b38-515235565edf is 0 May 8 21:33:30.309: INFO: Restart count of pod container-probe-9684/liveness-51af2316-9fd7-4363-8b38-515235565edf is now 1 (18.042615061s elapsed) May 8 21:33:50.351: INFO: Restart count of pod container-probe-9684/liveness-51af2316-9fd7-4363-8b38-515235565edf is now 2 (38.084419345s elapsed) May 8 21:34:10.392: INFO: Restart count of pod container-probe-9684/liveness-51af2316-9fd7-4363-8b38-515235565edf is now 3 (58.125496736s elapsed) May 8 21:34:30.592: INFO: Restart count of pod container-probe-9684/liveness-51af2316-9fd7-4363-8b38-515235565edf is now 4 (1m18.325481394s elapsed) May 8 21:35:32.739: INFO: Restart count of pod container-probe-9684/liveness-51af2316-9fd7-4363-8b38-515235565edf is now 5 (2m20.472606982s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:35:32.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9684" for this suite. 
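The monotonically increasing restartCount above is driven by a liveness probe that keeps failing: each failure past the threshold makes the kubelet kill and restart the container, and the widening gaps between restarts (18s, then 20s, then over a minute) reflect crash-loop backoff. A sketch of such a probe, with illustrative values:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// crashingLiveness always fails, so the kubelet restarts the container on
// every probe cycle; status.containerStatuses[0].restartCount then only
// ever increases, which is the invariant the test checks.
var crashingLiveness = &corev1.Probe{
	Handler: corev1.Handler{
		Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
	},
	InitialDelaySeconds: 5, // illustrative
	FailureThreshold:    1, // restart on the first failure
}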
• [SLOW TEST:144.682 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":976,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:35:32.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:35:32.854: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 8 21:35:35.065: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:35:36.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8959" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":63,"skipped":995,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:35:36.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:35:47.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2137" for this suite. • [SLOW TEST:11.350 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":64,"skipped":1008,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:35:47.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:35:51.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4942" for this suite. 
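Secret and ConfigMap volumes are both materialized by the kubelet on top of emptyDir-backed wrapper volumes, and the test checks that two of them can coexist in one pod without conflicting. A sketch of such a volume pair, with illustrative resource names:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// wrapperVolumes declares a Secret volume and a ConfigMap volume side by
// side; both are backed by kubelet-managed wrapper (emptyDir) volumes, and
// a pod mounting both should start cleanly.
var wrapperVolumes = []corev1.Volume{
	{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}, // illustrative
		},
	},
	{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"}, // illustrative
			},
		},
	},
}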
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":65,"skipped":1021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:35:51.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 8 21:35:56.258: INFO: &Pod{ObjectMeta:{send-events-86df6b38-a666-40f5-9950-a2c2e4144333 events-9047 /api/v1/namespaces/events-9047/pods/send-events-86df6b38-a666-40f5-9950-a2c2e4144333 73b2af12-c027-43ef-835f-368bce9b4ec1 14536093 0 2020-05-08 21:35:52 +0000 UTC map[name:foo time:829306253] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k45t6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k45t6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k45t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tol
eration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:35:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 21:35:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.32,StartTime:2020-05-08 21:35:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 21:35:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://36517ef6db9dbe2c7017496894081cafeb2e14a471ee3ebeb76c94549e13edd8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 8 21:35:58.263: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 8 21:36:00.268: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:36:00.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9047" for this suite. 
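The scheduler and kubelet checks above query the event stream with a field selector on the involved object and the event source. A sketch of the scheduler-side query (swap source to "kubelet" for the kubelet check), using the v0.17-era List signature without a context.Context argument:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// schedulerEventsForPod lists events emitted by the scheduler for one pod,
// matching on the event's involvedObject fields and its source component.
func schedulerEventsForPod(c kubernetes.Interface, ns, podName string) (*corev1.EventList, error) {
	selector := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      podName,
		"involvedObject.namespace": ns,
		"source":                   "default-scheduler",
	}.AsSelector().String()
	return c.CoreV1().Events(ns).List(metav1.ListOptions{FieldSelector: selector})
}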
• [SLOW TEST:8.568 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":66,"skipped":1047,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:36:00.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:36:00.883: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:36:02.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570560, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570560, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570561, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570560, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:36:06.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:36:06.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: 
Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:36:07.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9962" for this suite. STEP: Destroying namespace "webhook-9962-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.994 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":67,"skipped":1059,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:36:07.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-a3d864ea-2a66-4f26-8a47-23e7053e774f [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:36:07.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2261" for this suite. 
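The failure asserted above is API-server validation: a Secret whose Data map contains an empty key is rejected before it reaches storage. A sketch of the rejected create, with an illustrative Secret name and the v0.17-era signature:

package example

import (
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// emptyKeyIsRejected attempts to create a Secret with an empty data key and
// reports whether the API server returned a validation (Invalid) error,
// which is the outcome the test expects.
func emptyKeyIsRejected(c kubernetes.Interface, ns string) bool {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"}, // illustrative
		Data:       map[string][]byte{"": []byte("value-1")},        // empty key: invalid
	}
	_, err := c.CoreV1().Secrets(ns).Create(secret)
	return apierrors.IsInvalid(err)
}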
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":68,"skipped":1069,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:36:07.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 8 21:36:07.458: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix008018771/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:36:07.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7113" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":69,"skipped":1074,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:36:07.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 8 21:36:12.226: INFO: Successfully updated pod "pod-update-e7ecaae4-71f2-4101-b326-d698780277ea" STEP: verifying the updated pod is in kubernetes May 8 21:36:12.239: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:36:12.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-359" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1076,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:36:12.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:36:14.289: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:36:16.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 21:36:18.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570574, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:36:21.391: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) 
STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:36:33.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8220" for this suite. STEP: Destroying namespace "webhook-8220-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.488 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":71,"skipped":1076,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:36:33.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-2041 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2041 to expose endpoints map[] May 8 21:36:33.865: INFO: Get endpoints failed (38.355069ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 8 21:36:34.868: INFO: successfully validated that service endpoint-test2 in namespace services-2041 exposes endpoints map[] (1.041192313s elapsed) STEP: Creating pod pod1 in namespace services-2041 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2041 to expose endpoints map[pod1:[80]] May 8 21:36:38.937: INFO: successfully validated that service endpoint-test2 in namespace services-2041 exposes endpoints map[pod1:[80]] (4.064092387s elapsed) STEP: Creating pod pod2 in namespace services-2041 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2041 to expose endpoints map[pod1:[80] pod2:[80]] May 8 21:36:43.099: INFO: successfully validated that service endpoint-test2 in namespace 
services-2041 exposes endpoints map[pod1:[80] pod2:[80]] (4.159604989s elapsed) STEP: Deleting pod pod1 in namespace services-2041 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2041 to expose endpoints map[pod2:[80]] May 8 21:36:44.164: INFO: successfully validated that service endpoint-test2 in namespace services-2041 exposes endpoints map[pod2:[80]] (1.060191003s elapsed) STEP: Deleting pod pod2 in namespace services-2041 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2041 to expose endpoints map[] May 8 21:36:45.211: INFO: successfully validated that service endpoint-test2 in namespace services-2041 exposes endpoints map[] (1.042801176s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:36:45.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2041" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.691 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":72,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:36:45.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 21:36:45.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5789' May 8 21:36:45.633: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 21:36:45.633: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 8 21:36:45.686: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 8 21:36:45.721: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 8 21:36:45.736: INFO: scanned /root for discovery docs: May 8 21:36:45.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5789' May 8 21:37:01.692: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 8 21:37:01.692: INFO: stdout: "Created e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a\nScaling up e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 8 21:37:01.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5789' May 8 21:37:01.791: INFO: stderr: "" May 8 21:37:01.791: INFO: stdout: "e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a-b8tz9 " May 8 21:37:01.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a-b8tz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5789' May 8 21:37:01.890: INFO: stderr: "" May 8 21:37:01.890: INFO: stdout: "true" May 8 21:37:01.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a-b8tz9 -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5789' May 8 21:37:01.977: INFO: stderr: "" May 8 21:37:01.977: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 8 21:37:01.977: INFO: e2e-test-httpd-rc-f08b5a6311a095b721b2a30477f7b36a-b8tz9 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 8 21:37:01.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5789' May 8 21:37:02.067: INFO: stderr: "" May 8 21:37:02.068: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:02.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5789" for this suite. • [SLOW TEST:16.674 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":73,"skipped":1143,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:02.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:37:02.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6" in namespace "downward-api-209" to be "success or failure" May 8 21:37:02.219: INFO: Pod "downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006213ms May 8 21:37:04.223: INFO: Pod "downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007312889s May 8 21:37:06.228: INFO: Pod "downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6": Phase="Running", Reason="", readiness=true. Elapsed: 4.01233181s May 8 21:37:08.233: INFO: Pod "downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016978191s STEP: Saw pod success May 8 21:37:08.233: INFO: Pod "downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6" satisfied condition "success or failure" May 8 21:37:08.236: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6 container client-container: STEP: delete the pod May 8 21:37:08.291: INFO: Waiting for pod downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6 to disappear May 8 21:37:08.304: INFO: Pod downwardapi-volume-3cf9f808-2f31-4f6f-9fef-cf8d5eaef3f6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:08.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-209" for this suite. • [SLOW TEST:6.210 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1151,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:08.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-313955ac-2254-4cb9-a66f-06ad011467eb STEP: Creating the pod STEP: Updating configmap configmap-test-upd-313955ac-2254-4cb9-a66f-06ad011467eb STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:16.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-485" for this suite. 
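Note: the "waiting to observe update in volume" step above takes several seconds because the kubelet refreshes ConfigMap volumes on its periodic sync rather than instantly, so the test polls. A minimal sketch of the pattern under test, with illustrative names and image (the suite uses its own test images):

  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-volume-demo             # illustrative
  spec:
    containers:
    - name: reader
      image: docker.io/library/busybox:1.29 # illustrative image
      command: ["/bin/sh", "-c", "while true; do cat /etc/config/data; sleep 2; done"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      configMap:
        name: demo-config                   # editing this ConfigMap eventually changes /etc/config/data

Keys projected this way are swapped in atomically via symlinks when the kubelet syncs; environment variables populated from the same ConfigMap would not update.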
• [SLOW TEST:8.189 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1167,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:16.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:37:17.250: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:37:19.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:37:22.352: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:22.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5781" for this suite. STEP: Destroying namespace "webhook-5781-markers" for this suite. 
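Note: for the mutate-configmap case, the registered webhook answers the AdmissionReview with a base64-encoded JSONPatch that the API server applies before persisting the object. Shown as YAML for readability, with placeholder uid and patch contents (the suite's webhook sends its own):

  apiVersion: admission.k8s.io/v1
  kind: AdmissionReview
  response:
    uid: <uid copied from the incoming request>   # placeholder
    allowed: true
    patchType: JSONPatch
    patch: <base64 of a patch such as [{"op":"add","path":"/data/mutated-by","value":"webhook"}]>   # placeholder

The test then reads the created ConfigMap back and asserts the patched data is present.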
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.299 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":76,"skipped":1180,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:22.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:37:23.218: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4f1d4e9b-7734-4f50-97c0-de1b22109c53" in namespace "security-context-test-5521" to be "success or failure" May 8 21:37:23.232: INFO: Pod "busybox-readonly-false-4f1d4e9b-7734-4f50-97c0-de1b22109c53": Phase="Pending", Reason="", readiness=false. Elapsed: 14.774487ms May 8 21:37:25.236: INFO: Pod "busybox-readonly-false-4f1d4e9b-7734-4f50-97c0-de1b22109c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018486982s May 8 21:37:27.240: INFO: Pod "busybox-readonly-false-4f1d4e9b-7734-4f50-97c0-de1b22109c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022360973s May 8 21:37:27.240: INFO: Pod "busybox-readonly-false-4f1d4e9b-7734-4f50-97c0-de1b22109c53" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:27.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5521" for this suite. 
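Note: the pod above reaches Succeeded because readOnlyRootFilesystem=false leaves the container's root filesystem writable. A minimal sketch of the setting, with illustrative name and image; flipping it to true would make any write to the rootfs fail:

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-false-demo       # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: docker.io/library/busybox:1.29 # illustrative image
      command: ["/bin/sh", "-c", "touch /tmp/probe"]   # exits 0 only if the rootfs is writable
      securityContext:
        readOnlyRootFilesystem: false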
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:27.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 8 21:37:27.319: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:43.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-981" for this suite. • [SLOW TEST:16.668 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":78,"skipped":1212,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:43.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-69f977c7-b13c-4246-bb5a-6291a75d5f86 STEP: Creating a pod to test consume secrets May 8 21:37:44.096: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782" in namespace "projected-1218" to be "success or failure" May 8 21:37:44.101: 
INFO: Pod "pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.993151ms May 8 21:37:46.132: INFO: Pod "pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035467249s May 8 21:37:48.135: INFO: Pod "pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039201523s STEP: Saw pod success May 8 21:37:48.135: INFO: Pod "pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782" satisfied condition "success or failure" May 8 21:37:48.138: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782 container projected-secret-volume-test: STEP: delete the pod May 8 21:37:48.175: INFO: Waiting for pod pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782 to disappear May 8 21:37:48.237: INFO: Pod pod-projected-secrets-af65ee3d-405c-4b6d-8d76-28926afd5782 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:48.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1218" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1215,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:48.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-47e5bd93-a9c5-4c1a-b777-037e26171e3a STEP: Creating a pod to test consume configMaps May 8 21:37:48.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c" in namespace "configmap-2898" to be "success or failure" May 8 21:37:48.323: INFO: Pod "pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048633ms May 8 21:37:50.327: INFO: Pod "pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00821654s May 8 21:37:52.331: INFO: Pod "pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012246276s STEP: Saw pod success May 8 21:37:52.331: INFO: Pod "pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c" satisfied condition "success or failure" May 8 21:37:52.334: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c container configmap-volume-test: STEP: delete the pod May 8 21:37:52.374: INFO: Waiting for pod pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c to disappear May 8 21:37:52.404: INFO: Pod pod-configmaps-7fe51d35-cec5-4321-af71-c3f4711ffa7c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:37:52.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2898" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1216,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:37:52.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4090 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 21:37:52.480: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 21:38:22.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.39:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4090 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:38:22.694: INFO: >>> kubeConfig: /root/.kube/config I0508 21:38:22.730912 6 log.go:172] (0xc0010bc580) (0xc001de0aa0) Create stream I0508 21:38:22.730940 6 log.go:172] (0xc0010bc580) (0xc001de0aa0) Stream added, broadcasting: 1 I0508 21:38:22.733266 6 log.go:172] (0xc0010bc580) Reply frame received for 1 I0508 21:38:22.733328 6 log.go:172] (0xc0010bc580) (0xc0023d8fa0) Create stream I0508 21:38:22.733349 6 log.go:172] (0xc0010bc580) (0xc0023d8fa0) Stream added, broadcasting: 3 I0508 21:38:22.734246 6 log.go:172] (0xc0010bc580) Reply frame received for 3 I0508 21:38:22.734314 6 log.go:172] (0xc0010bc580) (0xc001de0c80) Create stream I0508 21:38:22.734344 6 log.go:172] (0xc0010bc580) (0xc001de0c80) Stream added, broadcasting: 5 I0508 21:38:22.735322 6 log.go:172] (0xc0010bc580) Reply frame received for 5 I0508 21:38:22.830020 6 log.go:172] (0xc0010bc580) Data frame received for 5 I0508 21:38:22.830125 6 log.go:172] (0xc001de0c80) (5) Data frame handling I0508 21:38:22.830166 6 log.go:172] (0xc0010bc580) Data frame received for 3 I0508 21:38:22.830197 6 log.go:172] 
(0xc0023d8fa0) (3) Data frame handling I0508 21:38:22.830221 6 log.go:172] (0xc0023d8fa0) (3) Data frame sent I0508 21:38:22.830242 6 log.go:172] (0xc0010bc580) Data frame received for 3 I0508 21:38:22.830259 6 log.go:172] (0xc0023d8fa0) (3) Data frame handling I0508 21:38:22.832384 6 log.go:172] (0xc0010bc580) Data frame received for 1 I0508 21:38:22.832407 6 log.go:172] (0xc001de0aa0) (1) Data frame handling I0508 21:38:22.832431 6 log.go:172] (0xc001de0aa0) (1) Data frame sent I0508 21:38:22.832447 6 log.go:172] (0xc0010bc580) (0xc001de0aa0) Stream removed, broadcasting: 1 I0508 21:38:22.832532 6 log.go:172] (0xc0010bc580) (0xc001de0aa0) Stream removed, broadcasting: 1 I0508 21:38:22.832555 6 log.go:172] (0xc0010bc580) (0xc0023d8fa0) Stream removed, broadcasting: 3 I0508 21:38:22.832645 6 log.go:172] (0xc0010bc580) Go away received I0508 21:38:22.832769 6 log.go:172] (0xc0010bc580) (0xc001de0c80) Stream removed, broadcasting: 5 May 8 21:38:22.832: INFO: Found all expected endpoints: [netserver-0] May 8 21:38:22.836: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.189:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4090 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:38:22.836: INFO: >>> kubeConfig: /root/.kube/config I0508 21:38:22.866482 6 log.go:172] (0xc000ea6a50) (0xc0029097c0) Create stream I0508 21:38:22.866519 6 log.go:172] (0xc000ea6a50) (0xc0029097c0) Stream added, broadcasting: 1 I0508 21:38:22.868656 6 log.go:172] (0xc000ea6a50) Reply frame received for 1 I0508 21:38:22.868705 6 log.go:172] (0xc000ea6a50) (0xc002909860) Create stream I0508 21:38:22.868723 6 log.go:172] (0xc000ea6a50) (0xc002909860) Stream added, broadcasting: 3 I0508 21:38:22.870035 6 log.go:172] (0xc000ea6a50) Reply frame received for 3 I0508 21:38:22.870073 6 log.go:172] (0xc000ea6a50) (0xc002909900) Create stream I0508 21:38:22.870085 6 log.go:172] (0xc000ea6a50) (0xc002909900) Stream added, broadcasting: 5 I0508 21:38:22.871116 6 log.go:172] (0xc000ea6a50) Reply frame received for 5 I0508 21:38:22.948831 6 log.go:172] (0xc000ea6a50) Data frame received for 3 I0508 21:38:22.948864 6 log.go:172] (0xc002909860) (3) Data frame handling I0508 21:38:22.948882 6 log.go:172] (0xc002909860) (3) Data frame sent I0508 21:38:22.948896 6 log.go:172] (0xc000ea6a50) Data frame received for 5 I0508 21:38:22.948905 6 log.go:172] (0xc002909900) (5) Data frame handling I0508 21:38:22.949492 6 log.go:172] (0xc000ea6a50) Data frame received for 3 I0508 21:38:22.949540 6 log.go:172] (0xc002909860) (3) Data frame handling I0508 21:38:22.950586 6 log.go:172] (0xc000ea6a50) Data frame received for 1 I0508 21:38:22.950602 6 log.go:172] (0xc0029097c0) (1) Data frame handling I0508 21:38:22.950620 6 log.go:172] (0xc0029097c0) (1) Data frame sent I0508 21:38:22.950638 6 log.go:172] (0xc000ea6a50) (0xc0029097c0) Stream removed, broadcasting: 1 I0508 21:38:22.950661 6 log.go:172] (0xc000ea6a50) Go away received I0508 21:38:22.950855 6 log.go:172] (0xc000ea6a50) (0xc0029097c0) Stream removed, broadcasting: 1 I0508 21:38:22.950891 6 log.go:172] (0xc000ea6a50) (0xc002909860) Stream removed, broadcasting: 3 I0508 21:38:22.950909 6 log.go:172] (0xc000ea6a50) (0xc002909900) Stream removed, broadcasting: 5 May 8 21:38:22.950: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:38:22.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4090" for this suite. • [SLOW TEST:30.547 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1216,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:38:22.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 8 21:38:23.021: INFO: Waiting up to 5m0s for pod "pod-2274313b-856f-4d7a-a320-0218c5631d87" in namespace "emptydir-6271" to be "success or failure" May 8 21:38:23.024: INFO: Pod "pod-2274313b-856f-4d7a-a320-0218c5631d87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.21554ms May 8 21:38:25.034: INFO: Pod "pod-2274313b-856f-4d7a-a320-0218c5631d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013160448s May 8 21:38:27.039: INFO: Pod "pod-2274313b-856f-4d7a-a320-0218c5631d87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017561306s STEP: Saw pod success May 8 21:38:27.039: INFO: Pod "pod-2274313b-856f-4d7a-a320-0218c5631d87" satisfied condition "success or failure" May 8 21:38:27.042: INFO: Trying to get logs from node jerma-worker pod pod-2274313b-856f-4d7a-a320-0218c5631d87 container test-container: STEP: delete the pod May 8 21:38:27.232: INFO: Waiting for pod pod-2274313b-856f-4d7a-a320-0218c5631d87 to disappear May 8 21:38:27.235: INFO: Pod pod-2274313b-856f-4d7a-a320-0218c5631d87 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:38:27.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6271" for this suite. 
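Note: the "(non-root,0644,default)" case means a non-root user writing a 0644-mode file on the default emptyDir medium (node disk, as opposed to medium: Memory). A rough equivalent of the test pod, with an illustrative UID and image rather than the suite's mounttest fixture:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo                # illustrative
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                       # illustrative non-root UID; emptyDir is world-writable by default
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29 # illustrative image
      command: ["/bin/sh", "-c", "echo hi > /ed/f && chmod 0644 /ed/f && ls -l /ed/f"]
      volumeMounts:
      - name: ed
        mountPath: /ed
    volumes:
    - name: ed
      emptyDir: {}                          # default medium; no Memory medium or sizeLimit set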
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:38:27.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0a8f7430-591b-424b-9c40-a2ecd9007be1 STEP: Creating a pod to test consume secrets May 8 21:38:27.308: INFO: Waiting up to 5m0s for pod "pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634" in namespace "secrets-4054" to be "success or failure" May 8 21:38:27.312: INFO: Pod "pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488844ms May 8 21:38:29.442: INFO: Pod "pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133722009s May 8 21:38:31.447: INFO: Pod "pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138111896s May 8 21:38:33.559: INFO: Pod "pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.250215387s STEP: Saw pod success May 8 21:38:33.559: INFO: Pod "pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634" satisfied condition "success or failure" May 8 21:38:33.562: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634 container secret-volume-test: STEP: delete the pod May 8 21:38:34.270: INFO: Waiting for pod pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634 to disappear May 8 21:38:34.288: INFO: Pod pod-secrets-048cf033-d389-48bd-b6f4-c757fe274634 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:38:34.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4054" for this suite. 
• [SLOW TEST:7.082 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1253,"failed":0} SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:38:34.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:38:34.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9795" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":84,"skipped":1257,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:38:34.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:38:34.577: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:38:35.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8488" for this suite. 
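Note: the getting/updating/patching exercised here requires the CRD to enable the status subresource, which puts status writes on a separate /status endpoint so they cannot touch spec (and vice versa). A minimal sketch of such a CRD; group and names are illustrative:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: noxus.example.com                 # must be <plural>.<group>
  spec:
    group: example.com
    scope: Cluster
    names:
      plural: noxus
      singular: noxu
      kind: Noxu
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        status: {}                          # enables GET/PUT/PATCH on .../noxus/<name>/status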
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":85,"skipped":1259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:38:35.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-6153 STEP: creating replication controller nodeport-test in namespace services-6153 I0508 21:38:35.405409 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6153, replica count: 2 I0508 21:38:38.455801 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 21:38:41.456042 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 21:38:41.456: INFO: Creating new exec pod May 8 21:38:46.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6153 execpod5hhdb -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 8 21:38:49.613: INFO: stderr: "I0508 21:38:49.521072 1377 log.go:172] (0xc000599340) (0xc0008be140) Create stream\nI0508 21:38:49.521274 1377 log.go:172] (0xc000599340) (0xc0008be140) Stream added, broadcasting: 1\nI0508 21:38:49.524285 1377 log.go:172] (0xc000599340) Reply frame received for 1\nI0508 21:38:49.524345 1377 log.go:172] (0xc000599340) (0xc0008be1e0) Create stream\nI0508 21:38:49.524364 1377 log.go:172] (0xc000599340) (0xc0008be1e0) Stream added, broadcasting: 3\nI0508 21:38:49.525878 1377 log.go:172] (0xc000599340) Reply frame received for 3\nI0508 21:38:49.525939 1377 log.go:172] (0xc000599340) (0xc0006ffc20) Create stream\nI0508 21:38:49.525962 1377 log.go:172] (0xc000599340) (0xc0006ffc20) Stream added, broadcasting: 5\nI0508 21:38:49.526894 1377 log.go:172] (0xc000599340) Reply frame received for 5\nI0508 21:38:49.606216 1377 log.go:172] (0xc000599340) Data frame received for 3\nI0508 21:38:49.606256 1377 log.go:172] (0xc0008be1e0) (3) Data frame handling\nI0508 21:38:49.606335 1377 log.go:172] (0xc000599340) Data frame received for 5\nI0508 21:38:49.606399 1377 log.go:172] (0xc0006ffc20) (5) Data frame handling\nI0508 21:38:49.606431 1377 log.go:172] (0xc0006ffc20) (5) Data frame sent\nI0508 21:38:49.606450 1377 log.go:172] (0xc000599340) Data frame received for 5\nI0508 21:38:49.606465 1377 log.go:172] (0xc0006ffc20) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] 
succeeded!\nI0508 21:38:49.607795 1377 log.go:172] (0xc000599340) Data frame received for 1\nI0508 21:38:49.607819 1377 log.go:172] (0xc0008be140) (1) Data frame handling\nI0508 21:38:49.607846 1377 log.go:172] (0xc0008be140) (1) Data frame sent\nI0508 21:38:49.607874 1377 log.go:172] (0xc000599340) (0xc0008be140) Stream removed, broadcasting: 1\nI0508 21:38:49.607922 1377 log.go:172] (0xc000599340) Go away received\nI0508 21:38:49.608240 1377 log.go:172] (0xc000599340) (0xc0008be140) Stream removed, broadcasting: 1\nI0508 21:38:49.608261 1377 log.go:172] (0xc000599340) (0xc0008be1e0) Stream removed, broadcasting: 3\nI0508 21:38:49.608271 1377 log.go:172] (0xc000599340) (0xc0006ffc20) Stream removed, broadcasting: 5\n" May 8 21:38:49.613: INFO: stdout: "" May 8 21:38:49.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6153 execpod5hhdb -- /bin/sh -x -c nc -zv -t -w 2 10.100.178.170 80' May 8 21:38:49.799: INFO: stderr: "I0508 21:38:49.733618 1410 log.go:172] (0xc000aec000) (0xc00077d5e0) Create stream\nI0508 21:38:49.733695 1410 log.go:172] (0xc000aec000) (0xc00077d5e0) Stream added, broadcasting: 1\nI0508 21:38:49.736100 1410 log.go:172] (0xc000aec000) Reply frame received for 1\nI0508 21:38:49.736141 1410 log.go:172] (0xc000aec000) (0xc00077d680) Create stream\nI0508 21:38:49.736168 1410 log.go:172] (0xc000aec000) (0xc00077d680) Stream added, broadcasting: 3\nI0508 21:38:49.736949 1410 log.go:172] (0xc000aec000) Reply frame received for 3\nI0508 21:38:49.736995 1410 log.go:172] (0xc000aec000) (0xc000914000) Create stream\nI0508 21:38:49.737009 1410 log.go:172] (0xc000aec000) (0xc000914000) Stream added, broadcasting: 5\nI0508 21:38:49.738381 1410 log.go:172] (0xc000aec000) Reply frame received for 5\nI0508 21:38:49.793694 1410 log.go:172] (0xc000aec000) Data frame received for 3\nI0508 21:38:49.793734 1410 log.go:172] (0xc00077d680) (3) Data frame handling\nI0508 21:38:49.793770 1410 log.go:172] (0xc000aec000) Data frame received for 5\nI0508 21:38:49.793782 1410 log.go:172] (0xc000914000) (5) Data frame handling\nI0508 21:38:49.793788 1410 log.go:172] (0xc000914000) (5) Data frame sent\nI0508 21:38:49.793794 1410 log.go:172] (0xc000aec000) Data frame received for 5\nI0508 21:38:49.793798 1410 log.go:172] (0xc000914000) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.178.170 80\nConnection to 10.100.178.170 80 port [tcp/http] succeeded!\nI0508 21:38:49.795153 1410 log.go:172] (0xc000aec000) Data frame received for 1\nI0508 21:38:49.795181 1410 log.go:172] (0xc00077d5e0) (1) Data frame handling\nI0508 21:38:49.795194 1410 log.go:172] (0xc00077d5e0) (1) Data frame sent\nI0508 21:38:49.795207 1410 log.go:172] (0xc000aec000) (0xc00077d5e0) Stream removed, broadcasting: 1\nI0508 21:38:49.795267 1410 log.go:172] (0xc000aec000) Go away received\nI0508 21:38:49.795476 1410 log.go:172] (0xc000aec000) (0xc00077d5e0) Stream removed, broadcasting: 1\nI0508 21:38:49.795492 1410 log.go:172] (0xc000aec000) (0xc00077d680) Stream removed, broadcasting: 3\nI0508 21:38:49.795503 1410 log.go:172] (0xc000aec000) (0xc000914000) Stream removed, broadcasting: 5\n" May 8 21:38:49.799: INFO: stdout: "" May 8 21:38:49.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6153 execpod5hhdb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31003' May 8 21:38:50.004: INFO: stderr: "I0508 21:38:49.943551 1431 log.go:172] (0xc0007ce630) (0xc0007ea1e0) Create stream\nI0508 21:38:49.943600 1431 log.go:172] (0xc0007ce630) 
(0xc0007ea1e0) Stream added, broadcasting: 1\nI0508 21:38:49.945942 1431 log.go:172] (0xc0007ce630) Reply frame received for 1\nI0508 21:38:49.945965 1431 log.go:172] (0xc0007ce630) (0xc0006bf900) Create stream\nI0508 21:38:49.945973 1431 log.go:172] (0xc0007ce630) (0xc0006bf900) Stream added, broadcasting: 3\nI0508 21:38:49.946772 1431 log.go:172] (0xc0007ce630) Reply frame received for 3\nI0508 21:38:49.946810 1431 log.go:172] (0xc0007ce630) (0xc0007ea280) Create stream\nI0508 21:38:49.946830 1431 log.go:172] (0xc0007ce630) (0xc0007ea280) Stream added, broadcasting: 5\nI0508 21:38:49.947475 1431 log.go:172] (0xc0007ce630) Reply frame received for 5\nI0508 21:38:49.996219 1431 log.go:172] (0xc0007ce630) Data frame received for 5\nI0508 21:38:49.996248 1431 log.go:172] (0xc0007ea280) (5) Data frame handling\nI0508 21:38:49.996267 1431 log.go:172] (0xc0007ea280) (5) Data frame sent\nI0508 21:38:49.996278 1431 log.go:172] (0xc0007ce630) Data frame received for 5\nI0508 21:38:49.996288 1431 log.go:172] (0xc0007ea280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31003\nConnection to 172.17.0.10 31003 port [tcp/31003] succeeded!\nI0508 21:38:49.996319 1431 log.go:172] (0xc0007ea280) (5) Data frame sent\nI0508 21:38:49.997067 1431 log.go:172] (0xc0007ce630) Data frame received for 3\nI0508 21:38:49.997288 1431 log.go:172] (0xc0006bf900) (3) Data frame handling\nI0508 21:38:49.997369 1431 log.go:172] (0xc0007ce630) Data frame received for 5\nI0508 21:38:49.997419 1431 log.go:172] (0xc0007ea280) (5) Data frame handling\nI0508 21:38:49.999498 1431 log.go:172] (0xc0007ce630) Data frame received for 1\nI0508 21:38:49.999536 1431 log.go:172] (0xc0007ea1e0) (1) Data frame handling\nI0508 21:38:49.999568 1431 log.go:172] (0xc0007ea1e0) (1) Data frame sent\nI0508 21:38:49.999591 1431 log.go:172] (0xc0007ce630) (0xc0007ea1e0) Stream removed, broadcasting: 1\nI0508 21:38:49.999680 1431 log.go:172] (0xc0007ce630) Go away received\nI0508 21:38:50.000127 1431 log.go:172] (0xc0007ce630) (0xc0007ea1e0) Stream removed, broadcasting: 1\nI0508 21:38:50.000170 1431 log.go:172] (0xc0007ce630) (0xc0006bf900) Stream removed, broadcasting: 3\nI0508 21:38:50.000187 1431 log.go:172] (0xc0007ce630) (0xc0007ea280) Stream removed, broadcasting: 5\n" May 8 21:38:50.004: INFO: stdout: "" May 8 21:38:50.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6153 execpod5hhdb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31003' May 8 21:38:50.203: INFO: stderr: "I0508 21:38:50.131718 1451 log.go:172] (0xc0000f42c0) (0xc00095c0a0) Create stream\nI0508 21:38:50.131782 1451 log.go:172] (0xc0000f42c0) (0xc00095c0a0) Stream added, broadcasting: 1\nI0508 21:38:50.135810 1451 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0508 21:38:50.135853 1451 log.go:172] (0xc0000f42c0) (0xc0005a1720) Create stream\nI0508 21:38:50.135866 1451 log.go:172] (0xc0000f42c0) (0xc0005a1720) Stream added, broadcasting: 3\nI0508 21:38:50.136855 1451 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0508 21:38:50.136901 1451 log.go:172] (0xc0000f42c0) (0xc00095c140) Create stream\nI0508 21:38:50.136913 1451 log.go:172] (0xc0000f42c0) (0xc00095c140) Stream added, broadcasting: 5\nI0508 21:38:50.138045 1451 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0508 21:38:50.196632 1451 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0508 21:38:50.196667 1451 log.go:172] (0xc0005a1720) (3) Data frame handling\nI0508 21:38:50.196690 1451 log.go:172] (0xc0000f42c0) Data frame received 
for 5\nI0508 21:38:50.196700 1451 log.go:172] (0xc00095c140) (5) Data frame handling\nI0508 21:38:50.196708 1451 log.go:172] (0xc00095c140) (5) Data frame sent\nI0508 21:38:50.196715 1451 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0508 21:38:50.196720 1451 log.go:172] (0xc00095c140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31003\nConnection to 172.17.0.8 31003 port [tcp/31003] succeeded!\nI0508 21:38:50.198125 1451 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0508 21:38:50.198139 1451 log.go:172] (0xc00095c0a0) (1) Data frame handling\nI0508 21:38:50.198148 1451 log.go:172] (0xc00095c0a0) (1) Data frame sent\nI0508 21:38:50.198159 1451 log.go:172] (0xc0000f42c0) (0xc00095c0a0) Stream removed, broadcasting: 1\nI0508 21:38:50.198173 1451 log.go:172] (0xc0000f42c0) Go away received\nI0508 21:38:50.198623 1451 log.go:172] (0xc0000f42c0) (0xc00095c0a0) Stream removed, broadcasting: 1\nI0508 21:38:50.198656 1451 log.go:172] (0xc0000f42c0) (0xc0005a1720) Stream removed, broadcasting: 3\nI0508 21:38:50.198669 1451 log.go:172] (0xc0000f42c0) (0xc00095c140) Stream removed, broadcasting: 5\n" May 8 21:38:50.203: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:38:50.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6153" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.984 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":86,"skipped":1286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:38:50.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 8 21:38:50.278: INFO: Waiting up to 5m0s for pod "pod-cdaf7913-5612-4b17-8c83-06906956c18c" in namespace "emptydir-9423" to be "success or failure" May 8 21:38:50.328: INFO: Pod "pod-cdaf7913-5612-4b17-8c83-06906956c18c": Phase="Pending", Reason="", readiness=false. Elapsed: 49.860043ms May 8 21:38:52.331: INFO: Pod "pod-cdaf7913-5612-4b17-8c83-06906956c18c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052909633s May 8 21:38:54.334: INFO: Pod "pod-cdaf7913-5612-4b17-8c83-06906956c18c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056455727s STEP: Saw pod success May 8 21:38:54.334: INFO: Pod "pod-cdaf7913-5612-4b17-8c83-06906956c18c" satisfied condition "success or failure" May 8 21:38:54.336: INFO: Trying to get logs from node jerma-worker pod pod-cdaf7913-5612-4b17-8c83-06906956c18c container test-container: STEP: delete the pod May 8 21:38:54.356: INFO: Waiting for pod pod-cdaf7913-5612-4b17-8c83-06906956c18c to disappear May 8 21:38:54.367: INFO: Pod pod-cdaf7913-5612-4b17-8c83-06906956c18c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:38:54.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9423" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1319,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:38:54.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3 May 8 21:38:54.525: INFO: Pod name my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3: Found 0 pods out of 1 May 8 21:38:59.528: INFO: Pod name my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3: Found 1 pods out of 1 May 8 21:38:59.528: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3" are running May 8 21:38:59.531: INFO: Pod "my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3-cq6fm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:38:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:38:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:38:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 21:38:54 +0000 UTC Reason: Message:}]) May 8 21:38:59.531: INFO: Trying to dial the pod May 8 21:39:04.540: INFO: Controller my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3: Got expected result from replica 1 [my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3-cq6fm]: "my-hostname-basic-a61be94d-91c6-48aa-9ea3-191599c00ea3-cq6fm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:39:04.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4624" for this suite. 
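Note: the test above amounts to running a one-replica ReplicationController whose pod serves its own hostname, then dialing the replica and comparing the answer to the pod name. A rough equivalent; the image, tag, port, and names are placeholders for the suite's agnhost test image:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: serve-hostname-demo               # illustrative
  spec:
    replicas: 1
    selector:
      name: serve-hostname-demo
    template:
      metadata:
        labels:
          name: serve-hostname-demo
      spec:
        containers:
        - name: serve-hostname
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # placeholder image/tag
          args: ["serve-hostname"]
          ports:
          - containerPort: 9376             # responds with the pod's hostname, i.e. the pod name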
• [SLOW TEST:10.174 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":88,"skipped":1322,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:39:04.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1198.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1198.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 21:39:10.704: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.708: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.711: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.714: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.723: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.727: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.730: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.734: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:10.740: INFO: Lookups using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local] May 8 21:39:15.751: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods 
dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.760: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.762: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.764: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.789: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.791: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.794: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.797: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:15.802: INFO: Lookups using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local] May 8 21:39:20.745: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.749: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.752: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.755: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod 
dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.763: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.766: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.769: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.771: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:20.777: INFO: Lookups using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local] May 8 21:39:25.745: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.748: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.752: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.755: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.765: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.768: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods 
dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.770: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.773: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:25.779: INFO: Lookups using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local] May 8 21:39:30.746: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.750: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.754: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.757: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.766: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.769: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.772: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.775: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:30.781: INFO: Lookups using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local] May 8 21:39:35.746: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.750: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.754: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.757: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.767: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.770: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.773: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.776: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource (get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:35.783: INFO: Lookups using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1198.svc.cluster.local jessie_udp@dns-test-service-2.dns-1198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1198.svc.cluster.local] May 8 21:39:40.773: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local from pod dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539: the server could not find the requested resource 
(get pods dns-test-db794d65-d973-4247-99c4-dbc8907b4539) May 8 21:39:40.802: INFO: Lookups using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 failed for: [wheezy_udp@dns-test-service-2.dns-1198.svc.cluster.local] May 8 21:39:45.783: INFO: DNS probes using dns-1198/dns-test-db794d65-d973-4247-99c4-dbc8907b4539 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:39:46.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1198" for this suite. • [SLOW TEST:41.772 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":89,"skipped":1332,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:39:46.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-72221d10-28f4-4d1b-b3b6-36c040df84d3 STEP: Creating a pod to test consume secrets May 8 21:39:47.732: INFO: Waiting up to 5m0s for pod "pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2" in namespace "secrets-9721" to be "success or failure" May 8 21:39:47.750: INFO: Pod "pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.312962ms May 8 21:39:49.779: INFO: Pod "pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046936588s May 8 21:39:51.788: INFO: Pod "pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055151364s STEP: Saw pod success May 8 21:39:51.788: INFO: Pod "pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2" satisfied condition "success or failure" May 8 21:39:51.791: INFO: Trying to get logs from node jerma-worker pod pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2 container secret-volume-test: STEP: delete the pod May 8 21:39:51.830: INFO: Waiting for pod pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2 to disappear May 8 21:39:51.835: INFO: Pod pod-secrets-8f9045e8-503c-405d-ae5f-36ee07d2d2f2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:39:51.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9721" for this suite. 
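
The Secrets spec above follows the suite's standard volume pattern: materialize a Secret as files inside the pod, run a short-lived container that inspects the mount, and treat Phase==Succeeded as the "success or failure" condition. A minimal sketch of the same probe, assuming a busybox image and illustrative names (the suite uses its own mounttest image):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Print the file modes and the content of the mounted key, then exit 0.
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF
kubectl logs -f pod-secrets-demo   # mode and contents of the mounted key, as the suite checks via logs
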
• [SLOW TEST:5.523 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1335,"failed":0} [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:39:51.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 8 21:39:51.880: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:39:51.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3938" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":91,"skipped":1335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:39:51.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 8 21:39:52.117: INFO: Pod name pod-release: Found 0 pods out of 1 May 8 21:39:57.138: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:39:57.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9915" for this suite. 
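
The release behavior checked here is plain controller semantics: a ReplicationController only owns pods matching its selector, so overwriting the matched label orphans the pod (its ownerReferences entry is dropped) and the controller creates a replacement to satisfy the selector again. A sketch with illustrative names:

POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite
# The pod is now orphaned; its ownerReferences should come back empty:
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'
# ...and the RC spins up a fresh replica matching the selector:
kubectl get pods -l name=pod-release
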
• [SLOW TEST:5.256 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":92,"skipped":1366,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:39:57.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:39:57.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3464" for this suite. 
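
The Kubelet spec above asserts nothing beyond deletability: a pod whose command always fails ends up in CrashLoopBackOff, and deleting it must still work promptly. Reproduced by hand (names illustrative, busybox assumed):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # always exits 1, so the kubelet restarts it forever
EOF
kubectl delete pod bin-false --wait=true   # must succeed even while the pod is crash-looping
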
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1373,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:39:57.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 8 21:40:05.617: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 8 21:40:20.765: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:40:20.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2584" for this suite. 
• [SLOW TEST:23.349 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":94,"skipped":1377,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:40:20.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 8 21:40:20.851: INFO: Waiting up to 5m0s for pod "pod-097e06f6-1800-45cf-8065-38c4e8d3497d" in namespace "emptydir-2737" to be "success or failure" May 8 21:40:20.854: INFO: Pod "pod-097e06f6-1800-45cf-8065-38c4e8d3497d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.747953ms May 8 21:40:22.859: INFO: Pod "pod-097e06f6-1800-45cf-8065-38c4e8d3497d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008374143s May 8 21:40:24.863: INFO: Pod "pod-097e06f6-1800-45cf-8065-38c4e8d3497d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012104281s STEP: Saw pod success May 8 21:40:24.863: INFO: Pod "pod-097e06f6-1800-45cf-8065-38c4e8d3497d" satisfied condition "success or failure" May 8 21:40:24.865: INFO: Trying to get logs from node jerma-worker2 pod pod-097e06f6-1800-45cf-8065-38c4e8d3497d container test-container: STEP: delete the pod May 8 21:40:24.899: INFO: Waiting for pod pod-097e06f6-1800-45cf-8065-38c4e8d3497d to disappear May 8 21:40:24.908: INFO: Pod pod-097e06f6-1800-45cf-8065-38c4e8d3497d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:40:24.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2737" for this suite. 
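
Each EmptyDir spec in this family is the same probe with different knobs: the medium (default disk vs tmpfs), the file mode, and the user the container runs as. The (non-root,0666,tmpfs) case just verified corresponds roughly to the manifest below; busybox and the UID are assumptions, the suite uses its own mounttest image.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" half of the test name
  containers:
  - name: test-container
    image: busybox
    # Create a 0666 file on the volume, then show its mode and the mount type (tmpfs).
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume && grep test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
EOF
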
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:40:24.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 8 21:40:30.022: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:40:31.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8282" for this suite. • [SLOW TEST:6.221 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":96,"skipped":1400,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:40:31.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 8 21:40:31.264: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:40:45.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-8183" for this suite. • [SLOW TEST:14.352 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":97,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:40:45.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:40:45.626: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 8 21:40:47.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1332 create -f -' May 8 21:40:52.687: INFO: stderr: "" May 8 21:40:52.688: INFO: stdout: "e2e-test-crd-publish-openapi-156-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 8 21:40:52.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1332 delete e2e-test-crd-publish-openapi-156-crds test-cr' May 8 21:40:52.795: INFO: stderr: "" May 8 21:40:52.795: INFO: stdout: "e2e-test-crd-publish-openapi-156-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 8 21:40:52.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1332 apply -f -' May 8 21:40:53.113: INFO: stderr: "" May 8 21:40:53.113: INFO: stdout: "e2e-test-crd-publish-openapi-156-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 8 21:40:53.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1332 delete e2e-test-crd-publish-openapi-156-crds test-cr' May 8 21:40:53.220: INFO: stderr: "" May 8 21:40:53.220: INFO: stdout: "e2e-test-crd-publish-openapi-156-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 8 21:40:53.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-156-crds' May 8 21:40:53.518: INFO: stderr: "" May 8 21:40:53.518: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-156-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:40:56.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1332" for this suite. • [SLOW TEST:10.910 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":98,"skipped":1419,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:40:56.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 21:41:00.585: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:41:00.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7274" for this suite. 
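
TerminationMessagePolicy: FallbackToLogsOnError, which this Container Runtime spec exercises, means: if the container exits non-zero and wrote nothing to its terminationMessagePath (/dev/termination-log by default), the kubelet uses the tail of the container log as the message instead, which is how "DONE" ends up matched above. A sketch with assumed names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]   # fails without writing /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the container fails, the log tail surfaces as the termination message:
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
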
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1420,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:41:00.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-5bh2 STEP: Creating a pod to test atomic-volume-subpath May 8 21:41:00.727: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5bh2" in namespace "subpath-8612" to be "success or failure" May 8 21:41:00.731: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003987ms May 8 21:41:02.735: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008310128s May 8 21:41:04.739: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012709968s May 8 21:41:06.744: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 6.016976285s May 8 21:41:08.748: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 8.021799245s May 8 21:41:10.753: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 10.02674978s May 8 21:41:12.758: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 12.031339692s May 8 21:41:14.763: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 14.035817991s May 8 21:41:16.767: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 16.040202509s May 8 21:41:18.771: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 18.044058376s May 8 21:41:20.775: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 20.048313705s May 8 21:41:22.779: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Running", Reason="", readiness=true. Elapsed: 22.052554245s May 8 21:41:24.817: INFO: Pod "pod-subpath-test-configmap-5bh2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.09079133s STEP: Saw pod success May 8 21:41:24.818: INFO: Pod "pod-subpath-test-configmap-5bh2" satisfied condition "success or failure" May 8 21:41:24.820: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-5bh2 container test-container-subpath-configmap-5bh2: STEP: delete the pod May 8 21:41:24.857: INFO: Waiting for pod pod-subpath-test-configmap-5bh2 to disappear May 8 21:41:24.874: INFO: Pod pod-subpath-test-configmap-5bh2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-5bh2 May 8 21:41:24.874: INFO: Deleting pod "pod-subpath-test-configmap-5bh2" in namespace "subpath-8612" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:41:24.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8612" for this suite. • [SLOW TEST:24.248 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":100,"skipped":1422,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:41:24.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 21:41:25.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2888' May 8 21:41:25.113: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 21:41:25.113: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 8 21:41:27.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2888' May 8 21:41:27.304: INFO: stderr: "" May 8 21:41:27.305: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:41:27.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2888" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":101,"skipped":1425,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:41:27.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:41:27.794: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:41:29.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570887, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570887, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570887, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724570887, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:41:32.862: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:41:32.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-770" for this suite. STEP: Destroying namespace "webhook-770-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.692 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":102,"skipped":1441,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:41:33.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-a76320fe-6f26-43c8-8977-1b1355ea9873 in namespace container-probe-8444 May 8 21:41:37.248: INFO: Started pod test-webserver-a76320fe-6f26-43c8-8977-1b1355ea9873 in namespace container-probe-8444 STEP: checking the pod's current state and verifying that restartCount is present May 8 21:41:37.251: INFO: Initial restart count of pod test-webserver-a76320fe-6f26-43c8-8977-1b1355ea9873 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:45:38.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-probe-8444" for this suite. • [SLOW TEST:245.143 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:45:38.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-9862 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9862 STEP: Deleting pre-stop pod May 8 21:45:51.686: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:45:51.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9862" for this suite. 
• [SLOW TEST:13.597 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":104,"skipped":1473,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:45:51.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8140.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8140.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8140.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8140.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8140.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8140.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 21:45:58.134: INFO: DNS probes using dns-8140/dns-test-e978358a-f742-41c8-9234-0171618c755f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:45:58.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8140" for this suite. 
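The names resolved above (dns-querier-2.dns-test-service-2.dns-8140.svc.cluster.local) come from pairing a pod's hostname and subdomain fields with a headless service whose name matches the subdomain; a minimal sketch, reusing the test's service and pod names with the rest illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: dns-test-service-2
    spec:
      clusterIP: None          # headless: DNS records point at pod IPs
      selector:
        app: dns-demo
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-querier-2
      labels:
        app: dns-demo
    spec:
      hostname: dns-querier-2
      subdomain: dns-test-service-2   # must match the headless service name
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
    EOF
    # From any pod in the same namespace:
    #   getent hosts dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local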
• [SLOW TEST:6.652 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":105,"skipped":1485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:45:58.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-820a1f04-8050-4b37-8435-50389bb9e6d2 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:45:58.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5370" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":106,"skipped":1522,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:45:58.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 8 21:45:58.960: INFO: Waiting up to 5m0s for pod "pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc" in namespace "emptydir-5557" to be "success or failure" May 8 21:45:58.966: INFO: Pod "pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.914313ms May 8 21:46:01.005: INFO: Pod "pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045269879s May 8 21:46:03.026: INFO: Pod "pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065926585s STEP: Saw pod success May 8 21:46:03.026: INFO: Pod "pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc" satisfied condition "success or failure" May 8 21:46:03.052: INFO: Trying to get logs from node jerma-worker pod pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc container test-container: STEP: delete the pod May 8 21:46:03.186: INFO: Waiting for pod pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc to disappear May 8 21:46:03.192: INFO: Pod pod-7121f2d2-f8a3-4171-8a5d-738680ebdddc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:46:03.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5557" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:46:03.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 8 21:46:13.350: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:13.350: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:13.384498 6 log.go:172] (0xc0016c88f0) (0xc001ebefa0) Create stream I0508 21:46:13.384526 6 log.go:172] (0xc0016c88f0) (0xc001ebefa0) Stream added, broadcasting: 1 I0508 21:46:13.386385 6 log.go:172] (0xc0016c88f0) Reply frame received for 1 I0508 21:46:13.386427 6 log.go:172] (0xc0016c88f0) (0xc001ebf0e0) Create stream I0508 21:46:13.386450 6 log.go:172] (0xc0016c88f0) (0xc001ebf0e0) Stream added, broadcasting: 3 I0508 21:46:13.387557 6 log.go:172] (0xc0016c88f0) Reply frame received for 3 I0508 21:46:13.387601 6 log.go:172] (0xc0016c88f0) (0xc002964e60) Create stream I0508 21:46:13.387620 6 log.go:172] (0xc0016c88f0) (0xc002964e60) Stream added, broadcasting: 5 I0508 21:46:13.388584 6 log.go:172] (0xc0016c88f0) Reply frame received for 5 I0508 21:46:13.483004 6 log.go:172] (0xc0016c88f0) Data frame received for 3 I0508 21:46:13.483029 6 log.go:172] (0xc001ebf0e0) (3) Data frame handling I0508 21:46:13.483041 6 log.go:172] (0xc001ebf0e0) (3) Data frame sent I0508 21:46:13.483137 6 log.go:172] (0xc0016c88f0) Data frame received for 5 I0508 21:46:13.483183 6 log.go:172] (0xc002964e60) (5) Data frame handling I0508 21:46:13.483203 6 log.go:172] (0xc0016c88f0) Data frame received for 3 I0508 
21:46:13.483208 6 log.go:172] (0xc001ebf0e0) (3) Data frame handling I0508 21:46:13.484642 6 log.go:172] (0xc0016c88f0) Data frame received for 1 I0508 21:46:13.484656 6 log.go:172] (0xc001ebefa0) (1) Data frame handling I0508 21:46:13.484669 6 log.go:172] (0xc001ebefa0) (1) Data frame sent I0508 21:46:13.484712 6 log.go:172] (0xc0016c88f0) (0xc001ebefa0) Stream removed, broadcasting: 1 I0508 21:46:13.484777 6 log.go:172] (0xc0016c88f0) (0xc001ebefa0) Stream removed, broadcasting: 1 I0508 21:46:13.484790 6 log.go:172] (0xc0016c88f0) (0xc001ebf0e0) Stream removed, broadcasting: 3 I0508 21:46:13.484901 6 log.go:172] (0xc0016c88f0) Go away received I0508 21:46:13.484957 6 log.go:172] (0xc0016c88f0) (0xc002964e60) Stream removed, broadcasting: 5 May 8 21:46:13.484: INFO: Exec stderr: "" May 8 21:46:13.485: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:13.485: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:13.538021 6 log.go:172] (0xc0016c8f20) (0xc001ebf360) Create stream I0508 21:46:13.538054 6 log.go:172] (0xc0016c8f20) (0xc001ebf360) Stream added, broadcasting: 1 I0508 21:46:13.540217 6 log.go:172] (0xc0016c8f20) Reply frame received for 1 I0508 21:46:13.540258 6 log.go:172] (0xc0016c8f20) (0xc00231a000) Create stream I0508 21:46:13.540268 6 log.go:172] (0xc0016c8f20) (0xc00231a000) Stream added, broadcasting: 3 I0508 21:46:13.541469 6 log.go:172] (0xc0016c8f20) Reply frame received for 3 I0508 21:46:13.541521 6 log.go:172] (0xc0016c8f20) (0xc001ebf400) Create stream I0508 21:46:13.541539 6 log.go:172] (0xc0016c8f20) (0xc001ebf400) Stream added, broadcasting: 5 I0508 21:46:13.542501 6 log.go:172] (0xc0016c8f20) Reply frame received for 5 I0508 21:46:13.603960 6 log.go:172] (0xc0016c8f20) Data frame received for 5 I0508 21:46:13.604001 6 log.go:172] (0xc001ebf400) (5) Data frame handling I0508 21:46:13.604025 6 log.go:172] (0xc0016c8f20) Data frame received for 3 I0508 21:46:13.604038 6 log.go:172] (0xc00231a000) (3) Data frame handling I0508 21:46:13.604083 6 log.go:172] (0xc00231a000) (3) Data frame sent I0508 21:46:13.604102 6 log.go:172] (0xc0016c8f20) Data frame received for 3 I0508 21:46:13.604112 6 log.go:172] (0xc00231a000) (3) Data frame handling I0508 21:46:13.605930 6 log.go:172] (0xc0016c8f20) Data frame received for 1 I0508 21:46:13.605969 6 log.go:172] (0xc001ebf360) (1) Data frame handling I0508 21:46:13.605994 6 log.go:172] (0xc001ebf360) (1) Data frame sent I0508 21:46:13.606015 6 log.go:172] (0xc0016c8f20) (0xc001ebf360) Stream removed, broadcasting: 1 I0508 21:46:13.606037 6 log.go:172] (0xc0016c8f20) Go away received I0508 21:46:13.606147 6 log.go:172] (0xc0016c8f20) (0xc001ebf360) Stream removed, broadcasting: 1 I0508 21:46:13.606167 6 log.go:172] (0xc0016c8f20) (0xc00231a000) Stream removed, broadcasting: 3 I0508 21:46:13.606173 6 log.go:172] (0xc0016c8f20) (0xc001ebf400) Stream removed, broadcasting: 5 May 8 21:46:13.606: INFO: Exec stderr: "" May 8 21:46:13.606: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:13.606: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:13.641072 6 log.go:172] (0xc0017324d0) (0xc0024fa280) Create stream I0508 21:46:13.641253 6 log.go:172] (0xc0017324d0) (0xc0024fa280) Stream added, broadcasting: 1 I0508 
21:46:13.643619 6 log.go:172] (0xc0017324d0) Reply frame received for 1 I0508 21:46:13.643659 6 log.go:172] (0xc0017324d0) (0xc002964f00) Create stream I0508 21:46:13.643673 6 log.go:172] (0xc0017324d0) (0xc002964f00) Stream added, broadcasting: 3 I0508 21:46:13.644630 6 log.go:172] (0xc0017324d0) Reply frame received for 3 I0508 21:46:13.644678 6 log.go:172] (0xc0017324d0) (0xc002964fa0) Create stream I0508 21:46:13.644692 6 log.go:172] (0xc0017324d0) (0xc002964fa0) Stream added, broadcasting: 5 I0508 21:46:13.646387 6 log.go:172] (0xc0017324d0) Reply frame received for 5 I0508 21:46:13.710993 6 log.go:172] (0xc0017324d0) Data frame received for 5 I0508 21:46:13.711034 6 log.go:172] (0xc002964fa0) (5) Data frame handling I0508 21:46:13.711056 6 log.go:172] (0xc0017324d0) Data frame received for 3 I0508 21:46:13.711066 6 log.go:172] (0xc002964f00) (3) Data frame handling I0508 21:46:13.711078 6 log.go:172] (0xc002964f00) (3) Data frame sent I0508 21:46:13.711087 6 log.go:172] (0xc0017324d0) Data frame received for 3 I0508 21:46:13.711096 6 log.go:172] (0xc002964f00) (3) Data frame handling I0508 21:46:13.712447 6 log.go:172] (0xc0017324d0) Data frame received for 1 I0508 21:46:13.712481 6 log.go:172] (0xc0024fa280) (1) Data frame handling I0508 21:46:13.712491 6 log.go:172] (0xc0024fa280) (1) Data frame sent I0508 21:46:13.712504 6 log.go:172] (0xc0017324d0) (0xc0024fa280) Stream removed, broadcasting: 1 I0508 21:46:13.712609 6 log.go:172] (0xc0017324d0) (0xc0024fa280) Stream removed, broadcasting: 1 I0508 21:46:13.712624 6 log.go:172] (0xc0017324d0) (0xc002964f00) Stream removed, broadcasting: 3 I0508 21:46:13.712802 6 log.go:172] (0xc0017324d0) (0xc002964fa0) Stream removed, broadcasting: 5 I0508 21:46:13.712860 6 log.go:172] (0xc0017324d0) Go away received May 8 21:46:13.712: INFO: Exec stderr: "" May 8 21:46:13.712: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:13.712: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:13.745470 6 log.go:172] (0xc001732c60) (0xc0024fa460) Create stream I0508 21:46:13.745519 6 log.go:172] (0xc001732c60) (0xc0024fa460) Stream added, broadcasting: 1 I0508 21:46:13.747687 6 log.go:172] (0xc001732c60) Reply frame received for 1 I0508 21:46:13.747736 6 log.go:172] (0xc001732c60) (0xc0024fa500) Create stream I0508 21:46:13.747748 6 log.go:172] (0xc001732c60) (0xc0024fa500) Stream added, broadcasting: 3 I0508 21:46:13.748861 6 log.go:172] (0xc001732c60) Reply frame received for 3 I0508 21:46:13.748896 6 log.go:172] (0xc001732c60) (0xc00231a0a0) Create stream I0508 21:46:13.748917 6 log.go:172] (0xc001732c60) (0xc00231a0a0) Stream added, broadcasting: 5 I0508 21:46:13.750191 6 log.go:172] (0xc001732c60) Reply frame received for 5 I0508 21:46:13.829512 6 log.go:172] (0xc001732c60) Data frame received for 5 I0508 21:46:13.829580 6 log.go:172] (0xc00231a0a0) (5) Data frame handling I0508 21:46:13.829626 6 log.go:172] (0xc001732c60) Data frame received for 3 I0508 21:46:13.829652 6 log.go:172] (0xc0024fa500) (3) Data frame handling I0508 21:46:13.829679 6 log.go:172] (0xc0024fa500) (3) Data frame sent I0508 21:46:13.829698 6 log.go:172] (0xc001732c60) Data frame received for 3 I0508 21:46:13.829722 6 log.go:172] (0xc0024fa500) (3) Data frame handling I0508 21:46:13.830941 6 log.go:172] (0xc001732c60) Data frame received for 1 I0508 21:46:13.830964 6 log.go:172] (0xc0024fa460) (1) Data frame 
handling I0508 21:46:13.830979 6 log.go:172] (0xc0024fa460) (1) Data frame sent I0508 21:46:13.830998 6 log.go:172] (0xc001732c60) (0xc0024fa460) Stream removed, broadcasting: 1 I0508 21:46:13.831083 6 log.go:172] (0xc001732c60) (0xc0024fa460) Stream removed, broadcasting: 1 I0508 21:46:13.831095 6 log.go:172] (0xc001732c60) (0xc0024fa500) Stream removed, broadcasting: 3 I0508 21:46:13.831221 6 log.go:172] (0xc001732c60) (0xc00231a0a0) Stream removed, broadcasting: 5 May 8 21:46:13.831: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 8 21:46:13.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:13.831: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:13.861551 6 log.go:172] (0xc0027bf080) (0xc00231a3c0) Create stream I0508 21:46:13.861577 6 log.go:172] (0xc0027bf080) (0xc00231a3c0) Stream added, broadcasting: 1 I0508 21:46:13.863729 6 log.go:172] (0xc0027bf080) Reply frame received for 1 I0508 21:46:13.863779 6 log.go:172] (0xc0027bf080) (0xc002965040) Create stream I0508 21:46:13.863794 6 log.go:172] (0xc0027bf080) (0xc002965040) Stream added, broadcasting: 3 I0508 21:46:13.864690 6 log.go:172] (0xc0027bf080) Reply frame received for 3 I0508 21:46:13.864722 6 log.go:172] (0xc0027bf080) (0xc00231a460) Create stream I0508 21:46:13.864734 6 log.go:172] (0xc0027bf080) (0xc00231a460) Stream added, broadcasting: 5 I0508 21:46:13.865921 6 log.go:172] (0xc0027bf080) Reply frame received for 5 I0508 21:46:13.935204 6 log.go:172] (0xc0027bf080) Data frame received for 5 I0508 21:46:13.935240 6 log.go:172] (0xc00231a460) (5) Data frame handling I0508 21:46:13.935266 6 log.go:172] (0xc0027bf080) Data frame received for 3 I0508 21:46:13.935281 6 log.go:172] (0xc002965040) (3) Data frame handling I0508 21:46:13.935305 6 log.go:172] (0xc002965040) (3) Data frame sent I0508 21:46:13.935320 6 log.go:172] (0xc0027bf080) Data frame received for 3 I0508 21:46:13.935330 6 log.go:172] (0xc002965040) (3) Data frame handling I0508 21:46:13.937357 6 log.go:172] (0xc0027bf080) Data frame received for 1 I0508 21:46:13.937398 6 log.go:172] (0xc00231a3c0) (1) Data frame handling I0508 21:46:13.937433 6 log.go:172] (0xc00231a3c0) (1) Data frame sent I0508 21:46:13.937572 6 log.go:172] (0xc0027bf080) (0xc00231a3c0) Stream removed, broadcasting: 1 I0508 21:46:13.937653 6 log.go:172] (0xc0027bf080) Go away received I0508 21:46:13.937717 6 log.go:172] (0xc0027bf080) (0xc00231a3c0) Stream removed, broadcasting: 1 I0508 21:46:13.937775 6 log.go:172] (0xc0027bf080) (0xc002965040) Stream removed, broadcasting: 3 I0508 21:46:13.937795 6 log.go:172] (0xc0027bf080) (0xc00231a460) Stream removed, broadcasting: 5 May 8 21:46:13.937: INFO: Exec stderr: "" May 8 21:46:13.937: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:13.937: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:13.965634 6 log.go:172] (0xc001982fd0) (0xc0029652c0) Create stream I0508 21:46:13.965674 6 log.go:172] (0xc001982fd0) (0xc0029652c0) Stream added, broadcasting: 1 I0508 21:46:13.968342 6 log.go:172] (0xc001982fd0) Reply frame received for 1 I0508 21:46:13.968384 6 log.go:172] (0xc001982fd0) (0xc001a17860) Create stream I0508 21:46:13.968399 
6 log.go:172] (0xc001982fd0) (0xc001a17860) Stream added, broadcasting: 3 I0508 21:46:13.970243 6 log.go:172] (0xc001982fd0) Reply frame received for 3 I0508 21:46:13.970288 6 log.go:172] (0xc001982fd0) (0xc0024fa640) Create stream I0508 21:46:13.970305 6 log.go:172] (0xc001982fd0) (0xc0024fa640) Stream added, broadcasting: 5 I0508 21:46:13.971627 6 log.go:172] (0xc001982fd0) Reply frame received for 5 I0508 21:46:14.054514 6 log.go:172] (0xc001982fd0) Data frame received for 5 I0508 21:46:14.054562 6 log.go:172] (0xc0024fa640) (5) Data frame handling I0508 21:46:14.054587 6 log.go:172] (0xc001982fd0) Data frame received for 3 I0508 21:46:14.054607 6 log.go:172] (0xc001a17860) (3) Data frame handling I0508 21:46:14.054630 6 log.go:172] (0xc001a17860) (3) Data frame sent I0508 21:46:14.054652 6 log.go:172] (0xc001982fd0) Data frame received for 3 I0508 21:46:14.054662 6 log.go:172] (0xc001a17860) (3) Data frame handling I0508 21:46:14.056082 6 log.go:172] (0xc001982fd0) Data frame received for 1 I0508 21:46:14.056116 6 log.go:172] (0xc0029652c0) (1) Data frame handling I0508 21:46:14.056131 6 log.go:172] (0xc0029652c0) (1) Data frame sent I0508 21:46:14.056161 6 log.go:172] (0xc001982fd0) (0xc0029652c0) Stream removed, broadcasting: 1 I0508 21:46:14.056198 6 log.go:172] (0xc001982fd0) Go away received I0508 21:46:14.056341 6 log.go:172] (0xc001982fd0) (0xc0029652c0) Stream removed, broadcasting: 1 I0508 21:46:14.056373 6 log.go:172] (0xc001982fd0) (0xc001a17860) Stream removed, broadcasting: 3 I0508 21:46:14.056401 6 log.go:172] (0xc001982fd0) (0xc0024fa640) Stream removed, broadcasting: 5 May 8 21:46:14.056: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 8 21:46:14.056: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:14.056: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:14.095362 6 log.go:172] (0xc001733290) (0xc0024fabe0) Create stream I0508 21:46:14.095392 6 log.go:172] (0xc001733290) (0xc0024fabe0) Stream added, broadcasting: 1 I0508 21:46:14.097303 6 log.go:172] (0xc001733290) Reply frame received for 1 I0508 21:46:14.097338 6 log.go:172] (0xc001733290) (0xc0024fac80) Create stream I0508 21:46:14.097347 6 log.go:172] (0xc001733290) (0xc0024fac80) Stream added, broadcasting: 3 I0508 21:46:14.098361 6 log.go:172] (0xc001733290) Reply frame received for 3 I0508 21:46:14.098394 6 log.go:172] (0xc001733290) (0xc001a17900) Create stream I0508 21:46:14.098406 6 log.go:172] (0xc001733290) (0xc001a17900) Stream added, broadcasting: 5 I0508 21:46:14.099469 6 log.go:172] (0xc001733290) Reply frame received for 5 I0508 21:46:14.159537 6 log.go:172] (0xc001733290) Data frame received for 3 I0508 21:46:14.159566 6 log.go:172] (0xc0024fac80) (3) Data frame handling I0508 21:46:14.159575 6 log.go:172] (0xc0024fac80) (3) Data frame sent I0508 21:46:14.159662 6 log.go:172] (0xc001733290) Data frame received for 3 I0508 21:46:14.159685 6 log.go:172] (0xc0024fac80) (3) Data frame handling I0508 21:46:14.159710 6 log.go:172] (0xc001733290) Data frame received for 5 I0508 21:46:14.159722 6 log.go:172] (0xc001a17900) (5) Data frame handling I0508 21:46:14.161641 6 log.go:172] (0xc001733290) Data frame received for 1 I0508 21:46:14.161660 6 log.go:172] (0xc0024fabe0) (1) Data frame handling I0508 21:46:14.161669 6 log.go:172] (0xc0024fabe0) (1) Data 
frame sent I0508 21:46:14.161678 6 log.go:172] (0xc001733290) (0xc0024fabe0) Stream removed, broadcasting: 1 I0508 21:46:14.161694 6 log.go:172] (0xc001733290) Go away received I0508 21:46:14.161848 6 log.go:172] (0xc001733290) (0xc0024fabe0) Stream removed, broadcasting: 1 I0508 21:46:14.161877 6 log.go:172] (0xc001733290) (0xc0024fac80) Stream removed, broadcasting: 3 I0508 21:46:14.161889 6 log.go:172] (0xc001733290) (0xc001a17900) Stream removed, broadcasting: 5 May 8 21:46:14.161: INFO: Exec stderr: "" May 8 21:46:14.161: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:14.161: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:14.212935 6 log.go:172] (0xc0016c9550) (0xc001ebf5e0) Create stream I0508 21:46:14.212961 6 log.go:172] (0xc0016c9550) (0xc001ebf5e0) Stream added, broadcasting: 1 I0508 21:46:14.214624 6 log.go:172] (0xc0016c9550) Reply frame received for 1 I0508 21:46:14.214652 6 log.go:172] (0xc0016c9550) (0xc001a17ae0) Create stream I0508 21:46:14.214665 6 log.go:172] (0xc0016c9550) (0xc001a17ae0) Stream added, broadcasting: 3 I0508 21:46:14.215441 6 log.go:172] (0xc0016c9550) Reply frame received for 3 I0508 21:46:14.215477 6 log.go:172] (0xc0016c9550) (0xc002965360) Create stream I0508 21:46:14.215491 6 log.go:172] (0xc0016c9550) (0xc002965360) Stream added, broadcasting: 5 I0508 21:46:14.216242 6 log.go:172] (0xc0016c9550) Reply frame received for 5 I0508 21:46:14.273814 6 log.go:172] (0xc0016c9550) Data frame received for 3 I0508 21:46:14.273860 6 log.go:172] (0xc001a17ae0) (3) Data frame handling I0508 21:46:14.273873 6 log.go:172] (0xc001a17ae0) (3) Data frame sent I0508 21:46:14.273887 6 log.go:172] (0xc0016c9550) Data frame received for 3 I0508 21:46:14.273904 6 log.go:172] (0xc001a17ae0) (3) Data frame handling I0508 21:46:14.273930 6 log.go:172] (0xc0016c9550) Data frame received for 5 I0508 21:46:14.273955 6 log.go:172] (0xc002965360) (5) Data frame handling I0508 21:46:14.274817 6 log.go:172] (0xc0016c9550) Data frame received for 1 I0508 21:46:14.274848 6 log.go:172] (0xc001ebf5e0) (1) Data frame handling I0508 21:46:14.274873 6 log.go:172] (0xc001ebf5e0) (1) Data frame sent I0508 21:46:14.274904 6 log.go:172] (0xc0016c9550) (0xc001ebf5e0) Stream removed, broadcasting: 1 I0508 21:46:14.274936 6 log.go:172] (0xc0016c9550) Go away received I0508 21:46:14.275031 6 log.go:172] (0xc0016c9550) (0xc001ebf5e0) Stream removed, broadcasting: 1 I0508 21:46:14.275056 6 log.go:172] (0xc0016c9550) (0xc001a17ae0) Stream removed, broadcasting: 3 I0508 21:46:14.275072 6 log.go:172] (0xc0016c9550) (0xc002965360) Stream removed, broadcasting: 5 May 8 21:46:14.275: INFO: Exec stderr: "" May 8 21:46:14.275: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:14.275: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:14.308191 6 log.go:172] (0xc001983600) (0xc002965540) Create stream I0508 21:46:14.308226 6 log.go:172] (0xc001983600) (0xc002965540) Stream added, broadcasting: 1 I0508 21:46:14.310357 6 log.go:172] (0xc001983600) Reply frame received for 1 I0508 21:46:14.310422 6 log.go:172] (0xc001983600) (0xc001a17c20) Create stream I0508 21:46:14.310443 6 log.go:172] (0xc001983600) (0xc001a17c20) Stream added, broadcasting: 3 I0508 
21:46:14.311364 6 log.go:172] (0xc001983600) Reply frame received for 3 I0508 21:46:14.311392 6 log.go:172] (0xc001983600) (0xc0024fad20) Create stream I0508 21:46:14.311407 6 log.go:172] (0xc001983600) (0xc0024fad20) Stream added, broadcasting: 5 I0508 21:46:14.312203 6 log.go:172] (0xc001983600) Reply frame received for 5 I0508 21:46:14.371596 6 log.go:172] (0xc001983600) Data frame received for 3 I0508 21:46:14.371656 6 log.go:172] (0xc001a17c20) (3) Data frame handling I0508 21:46:14.371672 6 log.go:172] (0xc001a17c20) (3) Data frame sent I0508 21:46:14.371685 6 log.go:172] (0xc001983600) Data frame received for 3 I0508 21:46:14.371701 6 log.go:172] (0xc001a17c20) (3) Data frame handling I0508 21:46:14.371754 6 log.go:172] (0xc001983600) Data frame received for 5 I0508 21:46:14.371788 6 log.go:172] (0xc0024fad20) (5) Data frame handling I0508 21:46:14.373084 6 log.go:172] (0xc001983600) Data frame received for 1 I0508 21:46:14.373103 6 log.go:172] (0xc002965540) (1) Data frame handling I0508 21:46:14.373312 6 log.go:172] (0xc002965540) (1) Data frame sent I0508 21:46:14.373344 6 log.go:172] (0xc001983600) (0xc002965540) Stream removed, broadcasting: 1 I0508 21:46:14.373378 6 log.go:172] (0xc001983600) Go away received I0508 21:46:14.373506 6 log.go:172] (0xc001983600) (0xc002965540) Stream removed, broadcasting: 1 I0508 21:46:14.373533 6 log.go:172] (0xc001983600) (0xc001a17c20) Stream removed, broadcasting: 3 I0508 21:46:14.373553 6 log.go:172] (0xc001983600) (0xc0024fad20) Stream removed, broadcasting: 5 May 8 21:46:14.373: INFO: Exec stderr: "" May 8 21:46:14.373: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6744 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:14.373: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:14.407744 6 log.go:172] (0xc001983c30) (0xc002965720) Create stream I0508 21:46:14.407777 6 log.go:172] (0xc001983c30) (0xc002965720) Stream added, broadcasting: 1 I0508 21:46:14.409820 6 log.go:172] (0xc001983c30) Reply frame received for 1 I0508 21:46:14.409865 6 log.go:172] (0xc001983c30) (0xc001a17cc0) Create stream I0508 21:46:14.409886 6 log.go:172] (0xc001983c30) (0xc001a17cc0) Stream added, broadcasting: 3 I0508 21:46:14.410981 6 log.go:172] (0xc001983c30) Reply frame received for 3 I0508 21:46:14.411025 6 log.go:172] (0xc001983c30) (0xc001a17d60) Create stream I0508 21:46:14.411040 6 log.go:172] (0xc001983c30) (0xc001a17d60) Stream added, broadcasting: 5 I0508 21:46:14.411894 6 log.go:172] (0xc001983c30) Reply frame received for 5 I0508 21:46:14.474553 6 log.go:172] (0xc001983c30) Data frame received for 3 I0508 21:46:14.474591 6 log.go:172] (0xc001a17cc0) (3) Data frame handling I0508 21:46:14.474606 6 log.go:172] (0xc001a17cc0) (3) Data frame sent I0508 21:46:14.474632 6 log.go:172] (0xc001983c30) Data frame received for 3 I0508 21:46:14.474649 6 log.go:172] (0xc001a17cc0) (3) Data frame handling I0508 21:46:14.474685 6 log.go:172] (0xc001983c30) Data frame received for 5 I0508 21:46:14.474721 6 log.go:172] (0xc001a17d60) (5) Data frame handling I0508 21:46:14.476172 6 log.go:172] (0xc001983c30) Data frame received for 1 I0508 21:46:14.476196 6 log.go:172] (0xc002965720) (1) Data frame handling I0508 21:46:14.476224 6 log.go:172] (0xc002965720) (1) Data frame sent I0508 21:46:14.476247 6 log.go:172] (0xc001983c30) (0xc002965720) Stream removed, broadcasting: 1 I0508 21:46:14.476297 6 log.go:172] (0xc001983c30) Go away 
received I0508 21:46:14.476365 6 log.go:172] (0xc001983c30) (0xc002965720) Stream removed, broadcasting: 1 I0508 21:46:14.476392 6 log.go:172] (0xc001983c30) (0xc001a17cc0) Stream removed, broadcasting: 3 I0508 21:46:14.476423 6 log.go:172] (0xc001983c30) (0xc001a17d60) Stream removed, broadcasting: 5 May 8 21:46:14.476: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:46:14.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6744" for this suite. • [SLOW TEST:11.285 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:46:14.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:46:14.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a" in namespace "projected-10" to be "success or failure" May 8 21:46:14.582: INFO: Pod "downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.998253ms May 8 21:46:16.586: INFO: Pod "downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009814767s May 8 21:46:18.591: INFO: Pod "downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014508458s STEP: Saw pod success May 8 21:46:18.591: INFO: Pod "downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a" satisfied condition "success or failure" May 8 21:46:18.594: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a container client-container: STEP: delete the pod May 8 21:46:18.802: INFO: Waiting for pod downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a to disappear May 8 21:46:18.834: INFO: Pod downwardapi-volume-68b1ebd4-3a31-46a3-a311-53b52341110a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:46:18.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-10" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:46:18.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8908 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 21:46:18.947: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 21:46:43.061: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.56 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8908 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:43.061: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:43.098232 6 log.go:172] (0xc001733a20) (0xc00279c500) Create stream I0508 21:46:43.098280 6 log.go:172] (0xc001733a20) (0xc00279c500) Stream added, broadcasting: 1 I0508 21:46:43.100149 6 log.go:172] (0xc001733a20) Reply frame received for 1 I0508 21:46:43.100214 6 log.go:172] (0xc001733a20) (0xc00279c640) Create stream I0508 21:46:43.100239 6 log.go:172] (0xc001733a20) (0xc00279c640) Stream added, broadcasting: 3 I0508 21:46:43.101288 6 log.go:172] (0xc001733a20) Reply frame received for 3 I0508 21:46:43.101329 6 log.go:172] (0xc001733a20) (0xc00231a500) Create stream I0508 21:46:43.101343 6 log.go:172] (0xc001733a20) (0xc00231a500) Stream added, broadcasting: 5 I0508 21:46:43.102468 6 log.go:172] (0xc001733a20) Reply frame received for 5 I0508 21:46:44.193819 6 log.go:172] (0xc001733a20) Data frame received for 3 I0508 21:46:44.193922 6 log.go:172] (0xc00279c640) (3) Data frame handling I0508 21:46:44.194010 6 log.go:172] (0xc00279c640) (3) Data frame sent I0508 21:46:44.194158 6 log.go:172] (0xc001733a20) Data frame received 
for 5 I0508 21:46:44.194248 6 log.go:172] (0xc00231a500) (5) Data frame handling I0508 21:46:44.194406 6 log.go:172] (0xc001733a20) Data frame received for 3 I0508 21:46:44.194448 6 log.go:172] (0xc00279c640) (3) Data frame handling I0508 21:46:44.196649 6 log.go:172] (0xc001733a20) Data frame received for 1 I0508 21:46:44.196683 6 log.go:172] (0xc00279c500) (1) Data frame handling I0508 21:46:44.196727 6 log.go:172] (0xc00279c500) (1) Data frame sent I0508 21:46:44.196977 6 log.go:172] (0xc001733a20) (0xc00279c500) Stream removed, broadcasting: 1 I0508 21:46:44.197024 6 log.go:172] (0xc001733a20) Go away received I0508 21:46:44.197327 6 log.go:172] (0xc001733a20) (0xc00279c500) Stream removed, broadcasting: 1 I0508 21:46:44.197365 6 log.go:172] (0xc001733a20) (0xc00279c640) Stream removed, broadcasting: 3 I0508 21:46:44.197387 6 log.go:172] (0xc001733a20) (0xc00231a500) Stream removed, broadcasting: 5 May 8 21:46:44.197: INFO: Found all expected endpoints: [netserver-0] May 8 21:46:44.201: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.202 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8908 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:46:44.201: INFO: >>> kubeConfig: /root/.kube/config I0508 21:46:44.249091 6 log.go:172] (0xc0027bf760) (0xc00231a8c0) Create stream I0508 21:46:44.249238 6 log.go:172] (0xc0027bf760) (0xc00231a8c0) Stream added, broadcasting: 1 I0508 21:46:44.250717 6 log.go:172] (0xc0027bf760) Reply frame received for 1 I0508 21:46:44.250748 6 log.go:172] (0xc0027bf760) (0xc00231a960) Create stream I0508 21:46:44.250761 6 log.go:172] (0xc0027bf760) (0xc00231a960) Stream added, broadcasting: 3 I0508 21:46:44.251509 6 log.go:172] (0xc0027bf760) Reply frame received for 3 I0508 21:46:44.251530 6 log.go:172] (0xc0027bf760) (0xc00280d220) Create stream I0508 21:46:44.251537 6 log.go:172] (0xc0027bf760) (0xc00280d220) Stream added, broadcasting: 5 I0508 21:46:44.252315 6 log.go:172] (0xc0027bf760) Reply frame received for 5 I0508 21:46:45.356570 6 log.go:172] (0xc0027bf760) Data frame received for 3 I0508 21:46:45.356616 6 log.go:172] (0xc00231a960) (3) Data frame handling I0508 21:46:45.356633 6 log.go:172] (0xc00231a960) (3) Data frame sent I0508 21:46:45.356649 6 log.go:172] (0xc0027bf760) Data frame received for 3 I0508 21:46:45.356658 6 log.go:172] (0xc00231a960) (3) Data frame handling I0508 21:46:45.356681 6 log.go:172] (0xc0027bf760) Data frame received for 5 I0508 21:46:45.356692 6 log.go:172] (0xc00280d220) (5) Data frame handling I0508 21:46:45.358525 6 log.go:172] (0xc0027bf760) Data frame received for 1 I0508 21:46:45.358565 6 log.go:172] (0xc00231a8c0) (1) Data frame handling I0508 21:46:45.358591 6 log.go:172] (0xc00231a8c0) (1) Data frame sent I0508 21:46:45.358605 6 log.go:172] (0xc0027bf760) (0xc00231a8c0) Stream removed, broadcasting: 1 I0508 21:46:45.358701 6 log.go:172] (0xc0027bf760) Go away received I0508 21:46:45.358739 6 log.go:172] (0xc0027bf760) (0xc00231a8c0) Stream removed, broadcasting: 1 I0508 21:46:45.358765 6 log.go:172] (0xc0027bf760) (0xc00231a960) Stream removed, broadcasting: 3 I0508 21:46:45.358785 6 log.go:172] (0xc0027bf760) (0xc00280d220) Stream removed, broadcasting: 5 May 8 21:46:45.358: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:46:45.358: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8908" for this suite. • [SLOW TEST:26.525 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1659,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:46:45.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-cd606d82-d4fc-4ef4-8075-2c46404f6710 STEP: Creating a pod to test consume configMaps May 8 21:46:45.508: INFO: Waiting up to 5m0s for pod "pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a" in namespace "configmap-8746" to be "success or failure" May 8 21:46:45.511: INFO: Pod "pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.292672ms May 8 21:46:47.682: INFO: Pod "pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174517708s May 8 21:46:49.687: INFO: Pod "pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179384716s May 8 21:46:51.711: INFO: Pod "pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202744263s STEP: Saw pod success May 8 21:46:51.711: INFO: Pod "pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a" satisfied condition "success or failure" May 8 21:46:51.714: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a container configmap-volume-test: STEP: delete the pod May 8 21:46:52.104: INFO: Waiting for pod pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a to disappear May 8 21:46:52.329: INFO: Pod pod-configmaps-c7c6ede4-5e0e-41bb-8c3b-d64ffc77de7a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:46:52.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8746" for this suite. 
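Consuming a ConfigMap volume as non-root, as the test above does, only requires a non-zero runAsUser plus a configMap volume; a minimal sketch (names, UID, and key are illustrative):

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-nonroot-demo
    spec:
      securityContext:
        runAsUser: 1000        # non-root; projected files default to mode 0644, so still readable
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "cat /etc/config/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: demo-cm
      restartPolicy: Never
    EOF
    kubectl logs cm-nonroot-demo   # prints: value-1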
• [SLOW TEST:6.969 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1661,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:46:52.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:46:53.186: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"55757ca9-3040-43dd-b5cc-18707e43e608", Controller:(*bool)(0xc0039dccfa), BlockOwnerDeletion:(*bool)(0xc0039dccfb)}} May 8 21:46:53.192: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"22f0cede-6ce7-4307-a1db-9d09463cc9de", Controller:(*bool)(0xc0039dce8a), BlockOwnerDeletion:(*bool)(0xc0039dce8b)}} May 8 21:46:53.243: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"15df6fdc-28c2-45d5-9c24-7ce27440b918", Controller:(*bool)(0xc00263dba2), BlockOwnerDeletion:(*bool)(0xc00263dba3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:46:58.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6363" for this suite. 
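The circular ownerReferences dumped above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) can be approximated by patching the references in after creation, since each owner's UID exists only once that pod does; a sketch, not the suite's actual mechanism (the helper name and pod names are illustrative):

    for p in pod1 pod2 pod3; do
      kubectl run "$p" --image=busybox --restart=Never -- sleep 3600
    done
    # Wire the circle: pod1 owned by pod3, pod2 by pod1, pod3 by pod2
    own() {   # own <pod> <owner>
      uid=$(kubectl get pod "$2" -o jsonpath='{.metadata.uid}')
      kubectl patch pod "$1" --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$2\",\"uid\":\"$uid\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
    }
    own pod1 pod3; own pod2 pod1; own pod3 pod2
    # Deleting any one pod leaves its dependent with a dangling owner,
    # so the garbage collector cascades around the circle instead of deadlocking
    kubectl delete pod pod1

That cascade, rather than a deadlock on the blockOwnerDeletion edges, is what the test asserts.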
• [SLOW TEST:6.068 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":112,"skipped":1664,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:46:58.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 8 21:46:58.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3948' May 8 21:46:58.825: INFO: stderr: "" May 8 21:46:58.825: INFO: stdout: "pod/pause created\n" May 8 21:46:58.825: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 8 21:46:58.825: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3948" to be "running and ready" May 8 21:46:58.833: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.983338ms May 8 21:47:00.837: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011481388s May 8 21:47:02.841: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015677398s May 8 21:47:02.841: INFO: Pod "pause" satisfied condition "running and ready" May 8 21:47:02.841: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 8 21:47:02.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3948' May 8 21:47:02.940: INFO: stderr: "" May 8 21:47:02.940: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 8 21:47:02.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3948' May 8 21:47:03.025: INFO: stderr: "" May 8 21:47:03.025: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 8 21:47:03.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3948' May 8 21:47:03.133: INFO: stderr: "" May 8 21:47:03.133: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 8 21:47:03.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3948' May 8 21:47:03.234: INFO: stderr: "" May 8 21:47:03.234: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 8 21:47:03.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3948' May 8 21:47:03.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 21:47:03.327: INFO: stdout: "pod \"pause\" force deleted\n" May 8 21:47:03.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3948' May 8 21:47:03.425: INFO: stderr: "No resources found in kubectl-3948 namespace.\n" May 8 21:47:03.425: INFO: stdout: "" May 8 21:47:03.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3948 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 21:47:03.667: INFO: stderr: "" May 8 21:47:03.667: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:47:03.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3948" for this suite. 
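The label round-trip above reduces to a few kubectl idioms worth keeping at hand (pod name and key are illustrative):

    kubectl label pod pause testing-label=testing-label-value   # add a label
    kubectl get pod pause -L testing-label                      # show it as an extra column
    kubectl label pod pause testing-label-                      # a trailing '-' removes the key
    kubectl label pod pause testing-label=new --overwrite       # required to change an existing value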
• [SLOW TEST:5.485 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":113,"skipped":1677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:47:03.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 8 21:47:10.361: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8836 PodName:pod-sharedvolume-9034e053-940d-40ca-93e6-8b1bbad9ee9c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:47:10.361: INFO: >>> kubeConfig: /root/.kube/config I0508 21:47:10.391476 6 log.go:172] (0xc0027bf8c0) (0xc001de06e0) Create stream I0508 21:47:10.391502 6 log.go:172] (0xc0027bf8c0) (0xc001de06e0) Stream added, broadcasting: 1 I0508 21:47:10.392988 6 log.go:172] (0xc0027bf8c0) Reply frame received for 1 I0508 21:47:10.393029 6 log.go:172] (0xc0027bf8c0) (0xc0019a40a0) Create stream I0508 21:47:10.393044 6 log.go:172] (0xc0027bf8c0) (0xc0019a40a0) Stream added, broadcasting: 3 I0508 21:47:10.393974 6 log.go:172] (0xc0027bf8c0) Reply frame received for 3 I0508 21:47:10.394026 6 log.go:172] (0xc0027bf8c0) (0xc001de08c0) Create stream I0508 21:47:10.394041 6 log.go:172] (0xc0027bf8c0) (0xc001de08c0) Stream added, broadcasting: 5 I0508 21:47:10.394807 6 log.go:172] (0xc0027bf8c0) Reply frame received for 5 I0508 21:47:10.487327 6 log.go:172] (0xc0027bf8c0) Data frame received for 5 I0508 21:47:10.487387 6 log.go:172] (0xc001de08c0) (5) Data frame handling I0508 21:47:10.487427 6 log.go:172] (0xc0027bf8c0) Data frame received for 3 I0508 21:47:10.487448 6 log.go:172] (0xc0019a40a0) (3) Data frame handling I0508 21:47:10.487479 6 log.go:172] (0xc0019a40a0) (3) Data frame sent I0508 21:47:10.487502 6 log.go:172] (0xc0027bf8c0) Data frame received for 3 I0508 21:47:10.487522 6 log.go:172] (0xc0019a40a0) (3) Data frame handling I0508 21:47:10.489095 6 log.go:172] (0xc0027bf8c0) Data frame received for 1 I0508 21:47:10.489232 6 log.go:172] (0xc001de06e0) (1) Data frame handling I0508 21:47:10.489263 6 log.go:172] (0xc001de06e0) (1) Data frame sent I0508 21:47:10.489280 6 log.go:172] (0xc0027bf8c0) (0xc001de06e0) Stream removed, broadcasting: 1 I0508 21:47:10.489297 6 log.go:172] (0xc0027bf8c0) Go away 
received I0508 21:47:10.489445 6 log.go:172] (0xc0027bf8c0) (0xc001de06e0) Stream removed, broadcasting: 1 I0508 21:47:10.489469 6 log.go:172] (0xc0027bf8c0) (0xc0019a40a0) Stream removed, broadcasting: 3 I0508 21:47:10.489481 6 log.go:172] (0xc0027bf8c0) (0xc001de08c0) Stream removed, broadcasting: 5 May 8 21:47:10.489: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:47:10.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8836" for this suite. • [SLOW TEST:6.628 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":114,"skipped":1707,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:47:10.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:47:14.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9872" for this suite. 
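The Kubelet spec that just completed ("when scheduling a busybox command in a pod should print the output to logs") reduces to a small contract: run a pod whose container writes a line to stdout, then confirm the same line comes back through the kubelet's log endpoint. A minimal sketch of such a pod in Go against the k8s.io/api types follows; the pod name, image tag, and echoed message are illustrative rather than the suite's, it assumes the k8s.io/api and k8s.io/apimachinery modules, and it only prints the manifest instead of submitting it to a cluster.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose single container writes one line to stdout and exits.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // let the container exit without restarting
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo hello from the busybox container"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Once a pod of this shape has run, `kubectl logs busybox-logs-demo` returns exactly the echoed line, and that equality is essentially what the conformance check asserts.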
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:47:14.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:47:15.642: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:47:17.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571235, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571235, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571235, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571235, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:47:20.690: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:47:21.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3408" for this suite. STEP: Destroying namespace "webhook-3408-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.645 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":116,"skipped":1743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:47:21.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 21:47:21.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7687' May 8 21:47:21.507: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 21:47:21.507: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 8 21:47:23.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7687' May 8 21:47:23.742: INFO: stderr: "" May 8 21:47:23.742: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:47:23.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7687" for this suite. 
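The stderr captured above is worth noting: generator-based `kubectl run` was already deprecated in this release, and `kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine` is the supported way to get the same object. Either path yields an ordinary apps/v1 Deployment; here is a hedged Go sketch of that shape (the name and labels are illustrative, and the program assumes the k8s.io/api and k8s.io/apimachinery modules and merely prints the manifest):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"run": "httpd-demo"}
	replicas := int32(1)
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "httpd-demo", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// The selector must match the pod template labels, or the API server rejects the object.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}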
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":117,"skipped":1782,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:47:23.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:47:57.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6447" for this suite. • [SLOW TEST:33.481 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1803,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:47:57.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:14.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3225" for this suite. • [SLOW TEST:17.127 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":119,"skipped":1805,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:14.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 8 21:48:14.571: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5754" to be "success or failure" May 8 21:48:14.575: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.40486ms May 8 21:48:16.578: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007081869s May 8 21:48:18.582: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011004811s May 8 21:48:20.587: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015222181s STEP: Saw pod success May 8 21:48:20.587: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 8 21:48:20.590: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 8 21:48:20.647: INFO: Waiting for pod pod-host-path-test to disappear May 8 21:48:20.656: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:20.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5754" for this suite. • [SLOW TEST:6.167 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1819,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:20.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 8 21:48:25.250: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1d1639b3-7d94-41ee-aca8-26e0122260cf" May 8 21:48:25.250: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1d1639b3-7d94-41ee-aca8-26e0122260cf" in namespace "pods-7717" to be "terminated due to deadline exceeded" May 8 21:48:25.260: INFO: Pod "pod-update-activedeadlineseconds-1d1639b3-7d94-41ee-aca8-26e0122260cf": Phase="Running", Reason="", readiness=true. Elapsed: 10.659974ms May 8 21:48:27.265: INFO: Pod "pod-update-activedeadlineseconds-1d1639b3-7d94-41ee-aca8-26e0122260cf": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014896231s May 8 21:48:27.265: INFO: Pod "pod-update-activedeadlineseconds-1d1639b3-7d94-41ee-aca8-26e0122260cf" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:27.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7717" for this suite. 
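The activeDeadlineSeconds spec above leans on two properties worth spelling out: the field is one of the very few parts of a running pod's spec the API server allows to be mutated (it may be set or shortened, never extended), and once the deadline elapses the kubelet fails the pod with reason DeadlineExceeded, which is the Running-to-Failed transition the log records. A minimal sketch of a pod created with a five-second deadline (names are illustrative; assumes the k8s.io/api and k8s.io/apimachinery modules):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	deadline := int64(5) // seconds the pod may stay active before the kubelet fails it
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "deadline-demo"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			RestartPolicy:         corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 600"}, // deliberately outlives the deadline
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}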
• [SLOW TEST:6.610 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1902,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:27.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 8 21:48:27.338: INFO: Waiting up to 5m0s for pod "downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a" in namespace "downward-api-6376" to be "success or failure" May 8 21:48:27.364: INFO: Pod "downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.552346ms May 8 21:48:29.396: INFO: Pod "downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057726951s May 8 21:48:31.400: INFO: Pod "downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061418909s STEP: Saw pod success May 8 21:48:31.400: INFO: Pod "downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a" satisfied condition "success or failure" May 8 21:48:31.402: INFO: Trying to get logs from node jerma-worker pod downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a container dapi-container: STEP: delete the pod May 8 21:48:31.508: INFO: Waiting for pod downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a to disappear May 8 21:48:31.636: INFO: Pod downward-api-cf94b83e-df13-46da-9ed6-16b6d1ab050a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:31.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6376" for this suite. 
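The Downward API spec above injects the pod's own name, namespace, and IP into the container's environment, one fieldRef per variable. A compact sketch of that wiring (the variable names and image are illustrative; note that status.podIP is only populated once the pod is actually running):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv builds one downward-API environment variable from a field path.
func fieldEnv(name, path string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
		},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "env | grep ^POD_"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}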
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1925,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:31.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:35.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2411" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:35.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:48:36.731: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:48:38.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 21:48:40.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571316, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:48:43.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:43.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6782" for this suite. STEP: Destroying namespace "webhook-6782-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.160 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":124,"skipped":1977,"failed":0} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:43.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-3473 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3473 to expose endpoints map[] May 8 21:48:44.115: INFO: Get endpoints failed (47.768874ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 8 21:48:45.118: INFO: successfully validated that service multi-endpoint-test in namespace services-3473 exposes endpoints map[] (1.051031016s elapsed) STEP: Creating pod pod1 in namespace services-3473 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3473 to expose endpoints map[pod1:[100]] May 8 21:48:49.298: INFO: successfully validated that service multi-endpoint-test in namespace services-3473 exposes endpoints map[pod1:[100]] (4.173019613s elapsed) STEP: Creating pod pod2 in namespace services-3473 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3473 to expose endpoints map[pod1:[100] pod2:[101]] May 8 21:48:52.533: INFO: successfully validated that service multi-endpoint-test in namespace services-3473 exposes endpoints map[pod1:[100] pod2:[101]] (3.23104438s elapsed) STEP: Deleting pod pod1 in namespace services-3473 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3473 to expose endpoints map[pod2:[101]] May 8 21:48:53.619: INFO: successfully validated that service multi-endpoint-test in namespace services-3473 exposes endpoints map[pod2:[101]] (1.083192084s elapsed) STEP: Deleting pod pod2 in namespace services-3473 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3473 to expose endpoints map[] May 8 21:48:54.636: INFO: successfully validated that service multi-endpoint-test in namespace services-3473 exposes endpoints map[] (1.011771353s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:54.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3473" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.715 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":125,"skipped":1979,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:54.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 8 21:48:54.750: INFO: Waiting up to 5m0s for pod "pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3" in namespace "emptydir-7339" to be "success or failure" May 8 21:48:54.752: INFO: Pod "pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.617595ms May 8 21:48:56.756: INFO: Pod "pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005735625s May 8 21:48:58.760: INFO: Pod "pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009778158s STEP: Saw pod success May 8 21:48:58.760: INFO: Pod "pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3" satisfied condition "success or failure" May 8 21:48:58.763: INFO: Trying to get logs from node jerma-worker2 pod pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3 container test-container: STEP: delete the pod May 8 21:48:58.815: INFO: Waiting for pod pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3 to disappear May 8 21:48:58.824: INFO: Pod pod-042c6b36-9fb4-4464-ab1c-4c44b4a33ad3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:48:58.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7339" for this suite. 
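The "(root,0644,default)" triple in the spec name encodes the scenario: the file is written as root, the suite expects mode 0644 back, and the emptyDir is backed by the default medium (node disk) rather than memory. The mode check itself is done by the test image, so only the volume shape is sketched here (names are illustrative; assumes the k8s.io/api and k8s.io/apimachinery modules):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "" (StorageMediumDefault) backs the volume with node storage;
					// StorageMediumMemory would make it a tmpfs instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "writer",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "echo data > /scratch/file && ls -l /scratch/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}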
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1997,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:48:58.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-5a163056-e76e-49e7-9ed1-96e22b5d55d9 in namespace container-probe-3059 May 8 21:49:02.955: INFO: Started pod busybox-5a163056-e76e-49e7-9ed1-96e22b5d55d9 in namespace container-probe-3059 STEP: checking the pod's current state and verifying that restartCount is present May 8 21:49:02.957: INFO: Initial restart count of pod busybox-5a163056-e76e-49e7-9ed1-96e22b5d55d9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:53:03.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3059" for this suite. 
• [SLOW TEST:244.720 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2010,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:53:03.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:53:03.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d" in namespace "projected-3069" to be "success or failure" May 8 21:53:03.709: INFO: Pod "downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.304714ms May 8 21:53:05.741: INFO: Pod "downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047613793s May 8 21:53:07.746: INFO: Pod "downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d": Phase="Running", Reason="", readiness=true. Elapsed: 4.051794862s May 8 21:53:09.750: INFO: Pod "downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056070587s STEP: Saw pod success May 8 21:53:09.750: INFO: Pod "downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d" satisfied condition "success or failure" May 8 21:53:09.753: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d container client-container: STEP: delete the pod May 8 21:53:09.803: INFO: Waiting for pod downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d to disappear May 8 21:53:09.829: INFO: Pod downwardapi-volume-ee85710e-719f-4c8f-87a8-fc2a3449dd0d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:53:09.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3069" for this suite. 
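What the projected downwardAPI spec checks is the fallback rule for resourceFieldRef: when the target container declares no memory limit, limits.memory resolves to the node's allocatable memory instead of erroring, and that value is what lands in the projected file. A sketch of just the volume wiring (the file path and container name are illustrative; assumes the k8s.io/api module):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// Materialized as a file in the volume; with no memory limit set
							// on the referenced container, it reports node allocatable memory.
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}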
• [SLOW TEST:6.258 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2016,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:53:09.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-c86e8836-8eb3-4ada-8738-704bf40241a2 STEP: Creating a pod to test consume configMaps May 8 21:53:09.933: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320" in namespace "projected-2238" to be "success or failure" May 8 21:53:09.936: INFO: Pod "pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190681ms May 8 21:53:11.973: INFO: Pod "pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040139523s May 8 21:53:13.977: INFO: Pod "pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044092767s STEP: Saw pod success May 8 21:53:13.977: INFO: Pod "pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320" satisfied condition "success or failure" May 8 21:53:13.980: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320 container projected-configmap-volume-test: STEP: delete the pod May 8 21:53:14.017: INFO: Waiting for pod pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320 to disappear May 8 21:53:14.032: INFO: Pod pod-projected-configmaps-b5eb6562-97b3-49d6-852c-13d93abc9320 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:53:14.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2238" for this suite. 
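"With mappings" in the spec name refers to the items list of the configMap projection: instead of materializing every key as a file named after the key, each listed key is remapped to an explicit path inside the volume. A one-key sketch (the configmap name, key, and path are illustrative; assumes the k8s.io/api module):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-demo"},
						// Without Items, every key appears as a file named after the key;
						// with Items, only the listed keys are projected, at the given paths.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}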
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2019,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:53:14.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6544 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6544 I0508 21:53:14.159885 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6544, replica count: 2 I0508 21:53:17.210291 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 21:53:20.210539 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 21:53:20.210: INFO: Creating new exec pod May 8 21:53:25.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6544 execpodsgpsj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 8 21:53:28.290: INFO: stderr: "I0508 21:53:28.208254 1879 log.go:172] (0xc000114dc0) (0xc000843e00) Create stream\nI0508 21:53:28.208321 1879 log.go:172] (0xc000114dc0) (0xc000843e00) Stream added, broadcasting: 1\nI0508 21:53:28.221806 1879 log.go:172] (0xc000114dc0) Reply frame received for 1\nI0508 21:53:28.221854 1879 log.go:172] (0xc000114dc0) (0xc000765680) Create stream\nI0508 21:53:28.221865 1879 log.go:172] (0xc000114dc0) (0xc000765680) Stream added, broadcasting: 3\nI0508 21:53:28.222959 1879 log.go:172] (0xc000114dc0) Reply frame received for 3\nI0508 21:53:28.223033 1879 log.go:172] (0xc000114dc0) (0xc000328000) Create stream\nI0508 21:53:28.223071 1879 log.go:172] (0xc000114dc0) (0xc000328000) Stream added, broadcasting: 5\nI0508 21:53:28.224555 1879 log.go:172] (0xc000114dc0) Reply frame received for 5\nI0508 21:53:28.283472 1879 log.go:172] (0xc000114dc0) Data frame received for 5\nI0508 21:53:28.283508 1879 log.go:172] (0xc000328000) (5) Data frame handling\nI0508 21:53:28.283528 1879 log.go:172] (0xc000328000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0508 21:53:28.283787 1879 log.go:172] (0xc000114dc0) Data frame received for 5\nI0508 21:53:28.283815 1879 log.go:172] (0xc000328000) (5) Data frame handling\nI0508 21:53:28.283845 1879 log.go:172] (0xc000328000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0508 21:53:28.284061 1879 
log.go:172] (0xc000114dc0) Data frame received for 5\nI0508 21:53:28.284164 1879 log.go:172] (0xc000328000) (5) Data frame handling\nI0508 21:53:28.284215 1879 log.go:172] (0xc000114dc0) Data frame received for 3\nI0508 21:53:28.284245 1879 log.go:172] (0xc000765680) (3) Data frame handling\nI0508 21:53:28.285938 1879 log.go:172] (0xc000114dc0) Data frame received for 1\nI0508 21:53:28.285952 1879 log.go:172] (0xc000843e00) (1) Data frame handling\nI0508 21:53:28.285959 1879 log.go:172] (0xc000843e00) (1) Data frame sent\nI0508 21:53:28.285977 1879 log.go:172] (0xc000114dc0) (0xc000843e00) Stream removed, broadcasting: 1\nI0508 21:53:28.286008 1879 log.go:172] (0xc000114dc0) Go away received\nI0508 21:53:28.286269 1879 log.go:172] (0xc000114dc0) (0xc000843e00) Stream removed, broadcasting: 1\nI0508 21:53:28.286281 1879 log.go:172] (0xc000114dc0) (0xc000765680) Stream removed, broadcasting: 3\nI0508 21:53:28.286287 1879 log.go:172] (0xc000114dc0) (0xc000328000) Stream removed, broadcasting: 5\n" May 8 21:53:28.290: INFO: stdout: "" May 8 21:53:28.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6544 execpodsgpsj -- /bin/sh -x -c nc -zv -t -w 2 10.104.14.121 80' May 8 21:53:28.486: INFO: stderr: "I0508 21:53:28.417677 1907 log.go:172] (0xc0000f4dc0) (0xc0006139a0) Create stream\nI0508 21:53:28.417734 1907 log.go:172] (0xc0000f4dc0) (0xc0006139a0) Stream added, broadcasting: 1\nI0508 21:53:28.420469 1907 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0508 21:53:28.420520 1907 log.go:172] (0xc0000f4dc0) (0xc00072c000) Create stream\nI0508 21:53:28.420539 1907 log.go:172] (0xc0000f4dc0) (0xc00072c000) Stream added, broadcasting: 3\nI0508 21:53:28.421525 1907 log.go:172] (0xc0000f4dc0) Reply frame received for 3\nI0508 21:53:28.421561 1907 log.go:172] (0xc0000f4dc0) (0xc000613b80) Create stream\nI0508 21:53:28.421571 1907 log.go:172] (0xc0000f4dc0) (0xc000613b80) Stream added, broadcasting: 5\nI0508 21:53:28.422426 1907 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0508 21:53:28.479343 1907 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0508 21:53:28.479392 1907 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0508 21:53:28.479428 1907 log.go:172] (0xc00072c000) (3) Data frame handling\nI0508 21:53:28.479454 1907 log.go:172] (0xc000613b80) (5) Data frame handling\nI0508 21:53:28.479474 1907 log.go:172] (0xc000613b80) (5) Data frame sent\nI0508 21:53:28.479488 1907 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0508 21:53:28.479500 1907 log.go:172] (0xc000613b80) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.14.121 80\nConnection to 10.104.14.121 80 port [tcp/http] succeeded!\nI0508 21:53:28.480957 1907 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0508 21:53:28.480968 1907 log.go:172] (0xc0006139a0) (1) Data frame handling\nI0508 21:53:28.480976 1907 log.go:172] (0xc0006139a0) (1) Data frame sent\nI0508 21:53:28.480984 1907 log.go:172] (0xc0000f4dc0) (0xc0006139a0) Stream removed, broadcasting: 1\nI0508 21:53:28.481356 1907 log.go:172] (0xc0000f4dc0) (0xc0006139a0) Stream removed, broadcasting: 1\nI0508 21:53:28.481371 1907 log.go:172] (0xc0000f4dc0) (0xc00072c000) Stream removed, broadcasting: 3\nI0508 21:53:28.481573 1907 log.go:172] (0xc0000f4dc0) Go away received\nI0508 21:53:28.481628 1907 log.go:172] (0xc0000f4dc0) (0xc000613b80) Stream removed, broadcasting: 5\n" May 8 21:53:28.486: INFO: stdout: "" May 8 21:53:28.486: INFO: Cleaning up the ExternalName to ClusterIP test service 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:53:28.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6544" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.511 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":130,"skipped":2041,"failed":0} [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:53:28.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 8 21:53:28.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5135' May 8 21:53:28.888: INFO: stderr: "" May 8 21:53:28.888: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 21:53:28.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5135' May 8 21:53:28.998: INFO: stderr: "" May 8 21:53:28.998: INFO: stdout: "update-demo-nautilus-5j86k update-demo-nautilus-6r5js " May 8 21:53:28.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j86k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:53:29.090: INFO: stderr: "" May 8 21:53:29.090: INFO: stdout: "" May 8 21:53:29.090: INFO: update-demo-nautilus-5j86k is created but not running May 8 21:53:34.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5135' May 8 21:53:34.308: INFO: stderr: "" May 8 21:53:34.308: INFO: stdout: "update-demo-nautilus-5j86k update-demo-nautilus-6r5js " May 8 21:53:34.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j86k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:53:34.421: INFO: stderr: "" May 8 21:53:34.421: INFO: stdout: "true" May 8 21:53:34.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j86k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:53:34.511: INFO: stderr: "" May 8 21:53:34.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 21:53:34.511: INFO: validating pod update-demo-nautilus-5j86k May 8 21:53:34.570: INFO: got data: { "image": "nautilus.jpg" } May 8 21:53:34.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 21:53:34.570: INFO: update-demo-nautilus-5j86k is verified up and running May 8 21:53:34.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r5js -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:53:34.661: INFO: stderr: "" May 8 21:53:34.661: INFO: stdout: "true" May 8 21:53:34.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r5js -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:53:34.756: INFO: stderr: "" May 8 21:53:34.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 21:53:34.756: INFO: validating pod update-demo-nautilus-6r5js May 8 21:53:34.760: INFO: got data: { "image": "nautilus.jpg" } May 8 21:53:34.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 8 21:53:34.760: INFO: update-demo-nautilus-6r5js is verified up and running STEP: rolling-update to new replication controller May 8 21:53:34.762: INFO: scanned /root for discovery docs: May 8 21:53:34.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5135' May 8 21:53:57.323: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 8 21:53:57.323: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 21:53:57.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5135' May 8 21:53:57.412: INFO: stderr: "" May 8 21:53:57.412: INFO: stdout: "update-demo-kitten-h65nj update-demo-kitten-x7859 update-demo-nautilus-5j86k " STEP: Replicas for name=update-demo: expected=2 actual=3 May 8 21:54:02.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5135' May 8 21:54:02.525: INFO: stderr: "" May 8 21:54:02.525: INFO: stdout: "update-demo-kitten-h65nj update-demo-kitten-x7859 " May 8 21:54:02.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h65nj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:54:02.621: INFO: stderr: "" May 8 21:54:02.621: INFO: stdout: "true" May 8 21:54:02.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h65nj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:54:02.713: INFO: stderr: "" May 8 21:54:02.713: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 8 21:54:02.713: INFO: validating pod update-demo-kitten-h65nj May 8 21:54:02.718: INFO: got data: { "image": "kitten.jpg" } May 8 21:54:02.718: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 8 21:54:02.718: INFO: update-demo-kitten-h65nj is verified up and running May 8 21:54:02.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x7859 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:54:02.818: INFO: stderr: "" May 8 21:54:02.818: INFO: stdout: "true" May 8 21:54:02.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x7859 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5135' May 8 21:54:02.916: INFO: stderr: "" May 8 21:54:02.916: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 8 21:54:02.916: INFO: validating pod update-demo-kitten-x7859 May 8 21:54:02.920: INFO: got data: { "image": "kitten.jpg" } May 8 21:54:02.920: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 8 21:54:02.920: INFO: update-demo-kitten-x7859 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:54:02.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5135" for this suite. • [SLOW TEST:34.377 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":131,"skipped":2041,"failed":0} [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:54:02.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:54:03.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984" in namespace "downward-api-6394" to be "success or failure" May 8 21:54:03.082: INFO: Pod "downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984": Phase="Pending", Reason="", readiness=false. Elapsed: 15.442044ms May 8 21:54:05.086: INFO: Pod "downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019238832s May 8 21:54:07.090: INFO: Pod "downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023489294s STEP: Saw pod success May 8 21:54:07.090: INFO: Pod "downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984" satisfied condition "success or failure" May 8 21:54:07.092: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984 container client-container: STEP: delete the pod May 8 21:54:07.331: INFO: Waiting for pod downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984 to disappear May 8 21:54:07.346: INFO: Pod downwardapi-volume-66438166-f62a-4703-8807-e6d8380c6984 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:54:07.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6394" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2041,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:54:07.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 8 21:54:07.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4436' May 8 21:54:07.690: INFO: stderr: "" May 8 21:54:07.690: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 21:54:07.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4436' May 8 21:54:07.808: INFO: stderr: "" May 8 21:54:07.808: INFO: stdout: "update-demo-nautilus-g2tsd update-demo-nautilus-v2pvn " May 8 21:54:07.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2tsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:07.896: INFO: stderr: "" May 8 21:54:07.896: INFO: stdout: "" May 8 21:54:07.896: INFO: update-demo-nautilus-g2tsd is created but not running May 8 21:54:12.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4436' May 8 21:54:13.008: INFO: stderr: "" May 8 21:54:13.008: INFO: stdout: "update-demo-nautilus-g2tsd update-demo-nautilus-v2pvn " May 8 21:54:13.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2tsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:13.101: INFO: stderr: "" May 8 21:54:13.101: INFO: stdout: "true" May 8 21:54:13.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2tsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:13.203: INFO: stderr: "" May 8 21:54:13.203: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 21:54:13.203: INFO: validating pod update-demo-nautilus-g2tsd May 8 21:54:13.207: INFO: got data: { "image": "nautilus.jpg" } May 8 21:54:13.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 21:54:13.208: INFO: update-demo-nautilus-g2tsd is verified up and running May 8 21:54:13.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v2pvn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:13.294: INFO: stderr: "" May 8 21:54:13.294: INFO: stdout: "true" May 8 21:54:13.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v2pvn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:13.379: INFO: stderr: "" May 8 21:54:13.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 21:54:13.379: INFO: validating pod update-demo-nautilus-v2pvn May 8 21:54:13.383: INFO: got data: { "image": "nautilus.jpg" } May 8 21:54:13.383: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 21:54:13.383: INFO: update-demo-nautilus-v2pvn is verified up and running STEP: scaling down the replication controller May 8 21:54:13.385: INFO: scanned /root for discovery docs: May 8 21:54:13.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4436' May 8 21:54:14.500: INFO: stderr: "" May 8 21:54:14.500: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 8 21:54:14.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4436' May 8 21:54:14.600: INFO: stderr: "" May 8 21:54:14.600: INFO: stdout: "update-demo-nautilus-g2tsd update-demo-nautilus-v2pvn " STEP: Replicas for name=update-demo: expected=1 actual=2 May 8 21:54:19.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4436' May 8 21:54:19.716: INFO: stderr: "" May 8 21:54:19.716: INFO: stdout: "update-demo-nautilus-g2tsd " May 8 21:54:19.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2tsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:19.800: INFO: stderr: "" May 8 21:54:19.800: INFO: stdout: "true" May 8 21:54:19.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2tsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:19.889: INFO: stderr: "" May 8 21:54:19.889: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 21:54:19.889: INFO: validating pod update-demo-nautilus-g2tsd May 8 21:54:19.892: INFO: got data: { "image": "nautilus.jpg" } May 8 21:54:19.892: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 21:54:19.892: INFO: update-demo-nautilus-g2tsd is verified up and running STEP: scaling up the replication controller May 8 21:54:19.894: INFO: scanned /root for discovery docs: May 8 21:54:19.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4436' May 8 21:54:21.026: INFO: stderr: "" May 8 21:54:21.026: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 21:54:21.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4436' May 8 21:54:21.131: INFO: stderr: "" May 8 21:54:21.131: INFO: stdout: "update-demo-nautilus-8cxdl update-demo-nautilus-g2tsd " May 8 21:54:21.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8cxdl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:21.216: INFO: stderr: "" May 8 21:54:21.216: INFO: stdout: "" May 8 21:54:21.216: INFO: update-demo-nautilus-8cxdl is created but not running May 8 21:54:26.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4436' May 8 21:54:26.326: INFO: stderr: "" May 8 21:54:26.326: INFO: stdout: "update-demo-nautilus-8cxdl update-demo-nautilus-g2tsd " May 8 21:54:26.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8cxdl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:26.418: INFO: stderr: "" May 8 21:54:26.418: INFO: stdout: "true" May 8 21:54:26.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8cxdl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:26.521: INFO: stderr: "" May 8 21:54:26.521: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 21:54:26.521: INFO: validating pod update-demo-nautilus-8cxdl May 8 21:54:26.525: INFO: got data: { "image": "nautilus.jpg" } May 8 21:54:26.525: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 21:54:26.525: INFO: update-demo-nautilus-8cxdl is verified up and running May 8 21:54:26.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2tsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:26.615: INFO: stderr: "" May 8 21:54:26.615: INFO: stdout: "true" May 8 21:54:26.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2tsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4436' May 8 21:54:26.703: INFO: stderr: "" May 8 21:54:26.703: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 21:54:26.703: INFO: validating pod update-demo-nautilus-g2tsd May 8 21:54:26.706: INFO: got data: { "image": "nautilus.jpg" } May 8 21:54:26.706: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 21:54:26.706: INFO: update-demo-nautilus-g2tsd is verified up and running STEP: using delete to clean up resources May 8 21:54:26.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4436' May 8 21:54:26.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 8 21:54:26.805: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 8 21:54:26.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4436' May 8 21:54:26.921: INFO: stderr: "No resources found in kubectl-4436 namespace.\n" May 8 21:54:26.921: INFO: stdout: "" May 8 21:54:26.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4436 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 21:54:27.022: INFO: stderr: "" May 8 21:54:27.022: INFO: stdout: "update-demo-nautilus-8cxdl\nupdate-demo-nautilus-g2tsd\n" May 8 21:54:27.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4436' May 8 21:54:27.625: INFO: stderr: "No resources found in kubectl-4436 namespace.\n" May 8 21:54:27.625: INFO: stdout: "" May 8 21:54:27.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4436 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 21:54:27.717: INFO: stderr: "" May 8 21:54:27.717: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:54:27.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4436" for this suite. • [SLOW TEST:20.392 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":133,"skipped":2055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:54:27.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9398 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 21:54:28.121: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 21:54:50.291: INFO: ExecWithOptions {Command:[/bin/sh 
-c curl -g -q -s 'http://10.244.1.79:8080/dial?request=hostname&protocol=http&host=10.244.1.78&port=8080&tries=1'] Namespace:pod-network-test-9398 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:54:50.291: INFO: >>> kubeConfig: /root/.kube/config I0508 21:54:50.324308 6 log.go:172] (0xc001982a50) (0xc0022a46e0) Create stream I0508 21:54:50.324349 6 log.go:172] (0xc001982a50) (0xc0022a46e0) Stream added, broadcasting: 1 I0508 21:54:50.326673 6 log.go:172] (0xc001982a50) Reply frame received for 1 I0508 21:54:50.326719 6 log.go:172] (0xc001982a50) (0xc0005c88c0) Create stream I0508 21:54:50.326736 6 log.go:172] (0xc001982a50) (0xc0005c88c0) Stream added, broadcasting: 3 I0508 21:54:50.327882 6 log.go:172] (0xc001982a50) Reply frame received for 3 I0508 21:54:50.327913 6 log.go:172] (0xc001982a50) (0xc0005c8b40) Create stream I0508 21:54:50.327924 6 log.go:172] (0xc001982a50) (0xc0005c8b40) Stream added, broadcasting: 5 I0508 21:54:50.328919 6 log.go:172] (0xc001982a50) Reply frame received for 5 I0508 21:54:50.397496 6 log.go:172] (0xc001982a50) Data frame received for 3 I0508 21:54:50.397542 6 log.go:172] (0xc0005c88c0) (3) Data frame handling I0508 21:54:50.397564 6 log.go:172] (0xc0005c88c0) (3) Data frame sent I0508 21:54:50.397747 6 log.go:172] (0xc001982a50) Data frame received for 5 I0508 21:54:50.397786 6 log.go:172] (0xc0005c8b40) (5) Data frame handling I0508 21:54:50.397934 6 log.go:172] (0xc001982a50) Data frame received for 3 I0508 21:54:50.397964 6 log.go:172] (0xc0005c88c0) (3) Data frame handling I0508 21:54:50.400158 6 log.go:172] (0xc001982a50) Data frame received for 1 I0508 21:54:50.400212 6 log.go:172] (0xc0022a46e0) (1) Data frame handling I0508 21:54:50.400244 6 log.go:172] (0xc0022a46e0) (1) Data frame sent I0508 21:54:50.400263 6 log.go:172] (0xc001982a50) (0xc0022a46e0) Stream removed, broadcasting: 1 I0508 21:54:50.400298 6 log.go:172] (0xc001982a50) Go away received I0508 21:54:50.400597 6 log.go:172] (0xc001982a50) (0xc0022a46e0) Stream removed, broadcasting: 1 I0508 21:54:50.400638 6 log.go:172] (0xc001982a50) (0xc0005c88c0) Stream removed, broadcasting: 3 I0508 21:54:50.400677 6 log.go:172] (0xc001982a50) (0xc0005c8b40) Stream removed, broadcasting: 5 May 8 21:54:50.400: INFO: Waiting for responses: map[] May 8 21:54:50.404: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.79:8080/dial?request=hostname&protocol=http&host=10.244.2.217&port=8080&tries=1'] Namespace:pod-network-test-9398 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 21:54:50.404: INFO: >>> kubeConfig: /root/.kube/config I0508 21:54:50.433296 6 log.go:172] (0xc001983080) (0xc0022a4960) Create stream I0508 21:54:50.433331 6 log.go:172] (0xc001983080) (0xc0022a4960) Stream added, broadcasting: 1 I0508 21:54:50.435015 6 log.go:172] (0xc001983080) Reply frame received for 1 I0508 21:54:50.435039 6 log.go:172] (0xc001983080) (0xc0022a4a00) Create stream I0508 21:54:50.435047 6 log.go:172] (0xc001983080) (0xc0022a4a00) Stream added, broadcasting: 3 I0508 21:54:50.435885 6 log.go:172] (0xc001983080) Reply frame received for 3 I0508 21:54:50.435938 6 log.go:172] (0xc001983080) (0xc001aac960) Create stream I0508 21:54:50.435959 6 log.go:172] (0xc001983080) (0xc001aac960) Stream added, broadcasting: 5 I0508 21:54:50.436837 6 log.go:172] (0xc001983080) Reply frame received for 5 I0508 21:54:50.502201 6 log.go:172] 
(0xc001983080) Data frame received for 3 I0508 21:54:50.502228 6 log.go:172] (0xc0022a4a00) (3) Data frame handling I0508 21:54:50.502250 6 log.go:172] (0xc0022a4a00) (3) Data frame sent I0508 21:54:50.503107 6 log.go:172] (0xc001983080) Data frame received for 5 I0508 21:54:50.503133 6 log.go:172] (0xc001aac960) (5) Data frame handling I0508 21:54:50.503325 6 log.go:172] (0xc001983080) Data frame received for 3 I0508 21:54:50.503341 6 log.go:172] (0xc0022a4a00) (3) Data frame handling I0508 21:54:50.505324 6 log.go:172] (0xc001983080) Data frame received for 1 I0508 21:54:50.505349 6 log.go:172] (0xc0022a4960) (1) Data frame handling I0508 21:54:50.505391 6 log.go:172] (0xc0022a4960) (1) Data frame sent I0508 21:54:50.505408 6 log.go:172] (0xc001983080) (0xc0022a4960) Stream removed, broadcasting: 1 I0508 21:54:50.505501 6 log.go:172] (0xc001983080) (0xc0022a4960) Stream removed, broadcasting: 1 I0508 21:54:50.505524 6 log.go:172] (0xc001983080) (0xc0022a4a00) Stream removed, broadcasting: 3 I0508 21:54:50.505581 6 log.go:172] (0xc001983080) Go away received I0508 21:54:50.505711 6 log.go:172] (0xc001983080) (0xc001aac960) Stream removed, broadcasting: 5 May 8 21:54:50.505: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:54:50.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9398" for this suite. • [SLOW TEST:22.764 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2117,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:54:50.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-6be862d6-65d2-403e-ad6b-71d4686ba14f STEP: Creating a pod to test consume configMaps May 8 21:54:50.647: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7" in namespace "projected-579" to be "success or failure" May 8 21:54:50.652: INFO: Pod "pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7": Phase="Pending", Reason="", 
readiness=false. Elapsed: 5.754781ms May 8 21:54:52.665: INFO: Pod "pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017937218s May 8 21:54:54.689: INFO: Pod "pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042527599s STEP: Saw pod success May 8 21:54:54.689: INFO: Pod "pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7" satisfied condition "success or failure" May 8 21:54:54.692: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7 container projected-configmap-volume-test: STEP: delete the pod May 8 21:54:54.736: INFO: Waiting for pod pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7 to disappear May 8 21:54:54.748: INFO: Pod pod-projected-configmaps-42db0479-9c7d-4474-9e12-ece38fb56df7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:54:54.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-579" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2118,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:54:54.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-5d22faaf-6927-4173-878b-eeba8b8dddd6 STEP: Creating a pod to test consume configMaps May 8 21:54:54.834: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1" in namespace "projected-855" to be "success or failure" May 8 21:54:54.838: INFO: Pod "pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140128ms May 8 21:54:56.880: INFO: Pod "pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046612841s May 8 21:54:58.941: INFO: Pod "pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.107762583s May 8 21:55:00.946: INFO: Pod "pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.112832455s STEP: Saw pod success May 8 21:55:00.947: INFO: Pod "pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1" satisfied condition "success or failure" May 8 21:55:00.950: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1 container projected-configmap-volume-test: STEP: delete the pod May 8 21:55:00.999: INFO: Waiting for pod pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1 to disappear May 8 21:55:01.012: INFO: Pod pod-projected-configmaps-f6a4cf3b-961f-4ace-b95f-383d98c762b1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:01.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-855" for this suite. • [SLOW TEST:6.263 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2132,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:01.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 21:55:07.155: INFO: DNS probes using dns-1843/dns-test-397a5a1d-5b66-406b-bde0-1b9157a857cc succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:07.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1843" for this suite. • [SLOW TEST:6.242 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":137,"skipped":2141,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:07.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 8 21:55:12.304: INFO: Successfully updated pod "labelsupdate0b692b81-87f5-47aa-af4a-64297c10fce0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:16.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2620" for this suite. 
• [SLOW TEST:9.093 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:16.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:20.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3296" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2182,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:20.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 8 21:55:25.142: INFO: Successfully updated pod "labelsupdatea4136d66-dd34-4294-adc8-d40ece34a5ba" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:29.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9094" for this suite. 
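The Kubelet test above schedules a busybox command that always fails and asserts the container reports a terminated reason. A sketch of the same observation done by hand (hypothetical pod name; image and flags are assumptions):

# Run a command that exits non-zero and never restarts, then read the
# terminal state the kubelet records (typically reason "Error").
kubectl run always-fails --image=busybox --restart=Never -- /bin/false
sleep 10   # allow the kubelet a moment to record the terminated state
kubectl get pod always-fails \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
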
• [SLOW TEST:8.714 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:29.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:55:29.827: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:55:31.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571729, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571729, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571729, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571729, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:55:34.911: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 8 21:55:39.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-611 to-be-attached-pod -i -c=container1' May 8 21:55:39.135: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:39.141: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "webhook-611" for this suite. STEP: Destroying namespace "webhook-611-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.019 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":141,"skipped":2212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:39.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 21:55:39.284: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:43.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-378" for this suite. 
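The websocket log-retrieval test above exercises the same pod log subresource that kubectl logs reads, just over a websocket upgrade. A plain-HTTP approximation through the API server (the pod name is a placeholder; the websocket handshake itself is normally handled by client libraries):

# Stream logs via the REST log subresource, proxied locally.
kubectl proxy --port=8001 &
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/pods-378/pods/POD_NAME/log?follow=true'
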
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:43.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-1e64b892-b817-4594-90fd-aabf12797f5a STEP: Creating a pod to test consume configMaps May 8 21:55:43.483: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb" in namespace "projected-488" to be "success or failure" May 8 21:55:43.507: INFO: Pod "pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 23.420723ms May 8 21:55:45.522: INFO: Pod "pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039172867s May 8 21:55:47.526: INFO: Pod "pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042541958s STEP: Saw pod success May 8 21:55:47.526: INFO: Pod "pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb" satisfied condition "success or failure" May 8 21:55:47.528: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb container projected-configmap-volume-test: STEP: delete the pod May 8 21:55:47.632: INFO: Waiting for pod pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb to disappear May 8 21:55:47.642: INFO: Pod pod-projected-configmaps-a1f3c39f-abb7-49e7-9535-5734f111a8cb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:47.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-488" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2292,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:47.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2be0a6b6-8a35-49c2-8e2e-35c7baef18ce STEP: Creating a pod to test consume secrets May 8 21:55:47.728: INFO: Waiting up to 5m0s for pod "pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1" in namespace "secrets-5" to be "success or failure" May 8 21:55:47.749: INFO: Pod "pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.561194ms May 8 21:55:49.789: INFO: Pod "pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061281287s May 8 21:55:51.793: INFO: Pod "pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064890664s STEP: Saw pod success May 8 21:55:51.793: INFO: Pod "pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1" satisfied condition "success or failure" May 8 21:55:51.796: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1 container secret-env-test: STEP: delete the pod May 8 21:55:51.811: INFO: Waiting for pod pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1 to disappear May 8 21:55:51.816: INFO: Pod pod-secrets-d6d1c5af-a575-4dd0-a32b-1ffc80f056f1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:55:51.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:55:51.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 21:55:52.572: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 21:55:54.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571752, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571752, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571752, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724571752, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 21:55:57.620: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:56:07.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-6246" for this suite. STEP: Destroying namespace "webhook-6246-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.093 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":145,"skipped":2344,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:56:07.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 8 21:56:08.010: INFO: Waiting up to 5m0s for pod "var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e" in namespace "var-expansion-8918" to be "success or failure" May 8 21:56:08.015: INFO: Pod "var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.961367ms May 8 21:56:10.018: INFO: Pod "var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008152281s May 8 21:56:12.059: INFO: Pod "var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04891821s STEP: Saw pod success May 8 21:56:12.059: INFO: Pod "var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e" satisfied condition "success or failure" May 8 21:56:12.224: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e container dapi-container: STEP: delete the pod May 8 21:56:12.293: INFO: Waiting for pod var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e to disappear May 8 21:56:12.318: INFO: Pod var-expansion-67f55127-756c-4b97-aa3b-5dfd7a23552e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:56:12.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8918" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:56:12.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 21:56:12.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3" in namespace "downward-api-9686" to be "success or failure" May 8 21:56:12.427: INFO: Pod "downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.572464ms May 8 21:56:14.432: INFO: Pod "downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006615226s May 8 21:56:16.436: INFO: Pod "downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011308459s STEP: Saw pod success May 8 21:56:16.436: INFO: Pod "downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3" satisfied condition "success or failure" May 8 21:56:16.439: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3 container client-container: STEP: delete the pod May 8 21:56:16.633: INFO: Waiting for pod downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3 to disappear May 8 21:56:16.686: INFO: Pod downwardapi-volume-4395b8ec-9ce6-4a31-99d1-9fa93a5803a3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:56:16.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9686" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:56:16.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-646ba6a1-4a8e-4a78-9f74-6cc81c77ea5e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-646ba6a1-4a8e-4a78-9f74-6cc81c77ea5e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:56:22.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4611" for this suite. • [SLOW TEST:6.166 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2465,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:56:22.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 8 21:56:33.021: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 21:56:33.046: INFO: Pod pod-with-poststart-http-hook still exists May 8 21:56:35.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 21:56:35.051: INFO: Pod pod-with-poststart-http-hook still exists May 8 21:56:37.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 21:56:37.050: INFO: Pod pod-with-poststart-http-hook still exists May 8 21:56:39.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 21:56:39.050: INFO: Pod pod-with-poststart-http-hook still exists May 8 21:56:41.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 21:56:41.051: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:56:41.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1332" for this suite. • [SLOW TEST:18.203 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2474,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:56:41.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:56:57.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6474" for this suite. 
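The quota lifecycle verified above, usage counted when the ConfigMap appears and released when it is deleted, looks like this when driven by hand (object names are invented for the example):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-demo                  # illustrative name
spec:
  hard:
    configmaps: "2"                 # cap the number of ConfigMaps in the namespace
EOF
kubectl create configmap quota-cm --from-literal=data=value
kubectl get resourcequota quota-demo -o jsonpath='{.status.used.configmaps}'   # rises to count quota-cm
kubectl delete configmap quota-cm   # afterwards, status.used.configmaps drops back

Quota status is recalculated asynchronously by a controller, which is why the test polls with "Ensuring resource quota status ..." steps instead of asserting immediately after each change.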
• [SLOW TEST:16.168 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":150,"skipped":2478,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:56:57.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 8 21:56:57.288: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 21:56:57.328: INFO: Waiting for terminating namespaces to be deleted... May 8 21:56:57.331: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 8 21:56:57.348: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:56:57.348: INFO: Container kindnet-cni ready: true, restart count 0 May 8 21:56:57.348: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:56:57.348: INFO: Container kube-proxy ready: true, restart count 0 May 8 21:56:57.348: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 8 21:56:57.353: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 8 21:56:57.353: INFO: Container kube-hunter ready: false, restart count 0 May 8 21:56:57.353: INFO: pod-handle-http-request from container-lifecycle-hook-1332 started at 2020-05-08 21:56:22 +0000 UTC (1 container status recorded) May 8 21:56:57.353: INFO: Container pod-handle-http-request ready: false, restart count 0 May 8 21:56:57.353: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:56:57.353: INFO: Container kindnet-cni ready: true, restart count 0 May 8 21:56:57.353: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 8 21:56:57.353: INFO: Container kube-bench ready: false, restart count 0 May 8 21:56:57.353: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 21:56:57.353: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-32353333-8041-459a-9434-19945836f79d 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-32353333-8041-459a-9434-19945836f79d off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-32353333-8041-459a-9434-19945836f79d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:57:05.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3831" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.359 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":151,"skipped":2498,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:57:05.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-e29a3b46-e00b-4a81-901c-005719156a84 in namespace container-probe-7320 May 8 21:57:09.745: INFO: Started pod busybox-e29a3b46-e00b-4a81-901c-005719156a84 in namespace container-probe-7320 STEP: checking the pod's current state and verifying that restartCount is present May 8 21:57:09.748: INFO: Initial restart count of pod busybox-e29a3b46-e00b-4a81-901c-005719156a84 is 0 May 8 21:57:58.488: INFO: Restart count of pod container-probe-7320/busybox-e29a3b46-e00b-4a81-901c-005719156a84 is now 1 (48.740506709s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:57:58.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7320" for this suite. 
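The single restart the probe test waits roughly 49 seconds for can be reproduced with a pod that removes its own health file; the image and timings below are illustrative, though the shape mirrors the conformance pattern:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo          # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # starts failing once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -w     # RESTARTS increments after the probe begins failing

The kubelet runs the probe command inside the container on each period; enough consecutive failures (failureThreshold, default 3) trigger a restart, which accounts for the delay between removing the file and the observed restart.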
• [SLOW TEST:52.982 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:57:58.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6519 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6519 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6519 May 8 21:57:59.126: INFO: Found 0 stateful pods, waiting for 1 May 8 21:58:09.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 8 21:58:09.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:58:09.370: INFO: stderr: "I0508 21:58:09.260867 2801 log.go:172] (0xc000116370) (0xc0006d3b80) Create stream\nI0508 21:58:09.260925 2801 log.go:172] (0xc000116370) (0xc0006d3b80) Stream added, broadcasting: 1\nI0508 21:58:09.263309 2801 log.go:172] (0xc000116370) Reply frame received for 1\nI0508 21:58:09.263347 2801 log.go:172] (0xc000116370) (0xc00091e000) Create stream\nI0508 21:58:09.263357 2801 log.go:172] (0xc000116370) (0xc00091e000) Stream added, broadcasting: 3\nI0508 21:58:09.264255 2801 log.go:172] (0xc000116370) Reply frame received for 3\nI0508 21:58:09.264283 2801 log.go:172] (0xc000116370) (0xc0006d3d60) Create stream\nI0508 21:58:09.264298 2801 log.go:172] (0xc000116370) (0xc0006d3d60) Stream added, broadcasting: 5\nI0508 21:58:09.265568 2801 log.go:172] (0xc000116370) Reply frame received for 5\nI0508 21:58:09.339948 2801 log.go:172] (0xc000116370) Data frame received for 5\nI0508 21:58:09.339979 2801 log.go:172] (0xc0006d3d60) (5) Data frame handling\nI0508 
21:58:09.339998 2801 log.go:172] (0xc0006d3d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:58:09.363712 2801 log.go:172] (0xc000116370) Data frame received for 5\nI0508 21:58:09.363742 2801 log.go:172] (0xc0006d3d60) (5) Data frame handling\nI0508 21:58:09.363764 2801 log.go:172] (0xc000116370) Data frame received for 3\nI0508 21:58:09.363772 2801 log.go:172] (0xc00091e000) (3) Data frame handling\nI0508 21:58:09.363783 2801 log.go:172] (0xc00091e000) (3) Data frame sent\nI0508 21:58:09.363794 2801 log.go:172] (0xc000116370) Data frame received for 3\nI0508 21:58:09.363801 2801 log.go:172] (0xc00091e000) (3) Data frame handling\nI0508 21:58:09.365645 2801 log.go:172] (0xc000116370) Data frame received for 1\nI0508 21:58:09.365664 2801 log.go:172] (0xc0006d3b80) (1) Data frame handling\nI0508 21:58:09.365680 2801 log.go:172] (0xc0006d3b80) (1) Data frame sent\nI0508 21:58:09.365696 2801 log.go:172] (0xc000116370) (0xc0006d3b80) Stream removed, broadcasting: 1\nI0508 21:58:09.365711 2801 log.go:172] (0xc000116370) Go away received\nI0508 21:58:09.365979 2801 log.go:172] (0xc000116370) (0xc0006d3b80) Stream removed, broadcasting: 1\nI0508 21:58:09.365996 2801 log.go:172] (0xc000116370) (0xc00091e000) Stream removed, broadcasting: 3\nI0508 21:58:09.366003 2801 log.go:172] (0xc000116370) (0xc0006d3d60) Stream removed, broadcasting: 5\n" May 8 21:58:09.370: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:58:09.370: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:58:09.374: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 8 21:58:19.378: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 21:58:19.379: INFO: Waiting for statefulset status.replicas updated to 0 May 8 21:58:19.409: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999724s May 8 21:58:20.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.97669419s May 8 21:58:21.452: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.938362443s May 8 21:58:22.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.933319226s May 8 21:58:23.461: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.929638606s May 8 21:58:24.465: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.924502421s May 8 21:58:25.470: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.920726091s May 8 21:58:26.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.915862622s May 8 21:58:27.478: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.911405471s May 8 21:58:28.482: INFO: Verifying statefulset ss doesn't scale past 1 for another 907.920786ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6519 May 8 21:58:29.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:58:29.710: INFO: stderr: "I0508 21:58:29.611851 2822 log.go:172] (0xc0003d6000) (0xc00058a6e0) Create stream\nI0508 21:58:29.611928 2822 log.go:172] (0xc0003d6000) (0xc00058a6e0) Stream added, broadcasting: 1\nI0508 21:58:29.614634 2822 log.go:172] (0xc0003d6000) Reply frame received for 
1\nI0508 21:58:29.614686 2822 log.go:172] (0xc0003d6000) (0xc0007474a0) Create stream\nI0508 21:58:29.614701 2822 log.go:172] (0xc0003d6000) (0xc0007474a0) Stream added, broadcasting: 3\nI0508 21:58:29.615706 2822 log.go:172] (0xc0003d6000) Reply frame received for 3\nI0508 21:58:29.615751 2822 log.go:172] (0xc0003d6000) (0xc00090a000) Create stream\nI0508 21:58:29.615763 2822 log.go:172] (0xc0003d6000) (0xc00090a000) Stream added, broadcasting: 5\nI0508 21:58:29.616819 2822 log.go:172] (0xc0003d6000) Reply frame received for 5\nI0508 21:58:29.704673 2822 log.go:172] (0xc0003d6000) Data frame received for 3\nI0508 21:58:29.704732 2822 log.go:172] (0xc0003d6000) Data frame received for 5\nI0508 21:58:29.704766 2822 log.go:172] (0xc00090a000) (5) Data frame handling\nI0508 21:58:29.704784 2822 log.go:172] (0xc00090a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 21:58:29.704795 2822 log.go:172] (0xc0003d6000) Data frame received for 5\nI0508 21:58:29.704826 2822 log.go:172] (0xc00090a000) (5) Data frame handling\nI0508 21:58:29.704848 2822 log.go:172] (0xc0007474a0) (3) Data frame handling\nI0508 21:58:29.704860 2822 log.go:172] (0xc0007474a0) (3) Data frame sent\nI0508 21:58:29.704872 2822 log.go:172] (0xc0003d6000) Data frame received for 3\nI0508 21:58:29.704894 2822 log.go:172] (0xc0007474a0) (3) Data frame handling\nI0508 21:58:29.706418 2822 log.go:172] (0xc0003d6000) Data frame received for 1\nI0508 21:58:29.706453 2822 log.go:172] (0xc00058a6e0) (1) Data frame handling\nI0508 21:58:29.706470 2822 log.go:172] (0xc00058a6e0) (1) Data frame sent\nI0508 21:58:29.706488 2822 log.go:172] (0xc0003d6000) (0xc00058a6e0) Stream removed, broadcasting: 1\nI0508 21:58:29.706507 2822 log.go:172] (0xc0003d6000) Go away received\nI0508 21:58:29.706984 2822 log.go:172] (0xc0003d6000) (0xc00058a6e0) Stream removed, broadcasting: 1\nI0508 21:58:29.707004 2822 log.go:172] (0xc0003d6000) (0xc0007474a0) Stream removed, broadcasting: 3\nI0508 21:58:29.707014 2822 log.go:172] (0xc0003d6000) (0xc00090a000) Stream removed, broadcasting: 5\n" May 8 21:58:29.711: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 21:58:29.711: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 21:58:29.714: INFO: Found 1 stateful pods, waiting for 3 May 8 21:58:39.719: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 8 21:58:39.719: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 8 21:58:39.719: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 8 21:58:39.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:58:39.952: INFO: stderr: "I0508 21:58:39.856029 2842 log.go:172] (0xc0001056b0) (0xc0007601e0) Create stream\nI0508 21:58:39.856104 2842 log.go:172] (0xc0001056b0) (0xc0007601e0) Stream added, broadcasting: 1\nI0508 21:58:39.858331 2842 log.go:172] (0xc0001056b0) Reply frame received for 1\nI0508 21:58:39.858402 2842 log.go:172] (0xc0001056b0) (0xc0008b2000) Create stream\nI0508 21:58:39.858434 2842 log.go:172] (0xc0001056b0) (0xc0008b2000) Stream added, broadcasting: 3\nI0508 
21:58:39.859508 2842 log.go:172] (0xc0001056b0) Reply frame received for 3\nI0508 21:58:39.859550 2842 log.go:172] (0xc0001056b0) (0xc0008de000) Create stream\nI0508 21:58:39.859564 2842 log.go:172] (0xc0001056b0) (0xc0008de000) Stream added, broadcasting: 5\nI0508 21:58:39.860702 2842 log.go:172] (0xc0001056b0) Reply frame received for 5\nI0508 21:58:39.945674 2842 log.go:172] (0xc0001056b0) Data frame received for 5\nI0508 21:58:39.945741 2842 log.go:172] (0xc0008de000) (5) Data frame handling\nI0508 21:58:39.945769 2842 log.go:172] (0xc0008de000) (5) Data frame sent\nI0508 21:58:39.945789 2842 log.go:172] (0xc0001056b0) Data frame received for 5\nI0508 21:58:39.945808 2842 log.go:172] (0xc0008de000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:58:39.945841 2842 log.go:172] (0xc0001056b0) Data frame received for 3\nI0508 21:58:39.945873 2842 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0508 21:58:39.945901 2842 log.go:172] (0xc0008b2000) (3) Data frame sent\nI0508 21:58:39.946155 2842 log.go:172] (0xc0001056b0) Data frame received for 3\nI0508 21:58:39.946176 2842 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0508 21:58:39.947361 2842 log.go:172] (0xc0001056b0) Data frame received for 1\nI0508 21:58:39.947396 2842 log.go:172] (0xc0007601e0) (1) Data frame handling\nI0508 21:58:39.947416 2842 log.go:172] (0xc0007601e0) (1) Data frame sent\nI0508 21:58:39.947492 2842 log.go:172] (0xc0001056b0) (0xc0007601e0) Stream removed, broadcasting: 1\nI0508 21:58:39.947660 2842 log.go:172] (0xc0001056b0) Go away received\nI0508 21:58:39.947782 2842 log.go:172] (0xc0001056b0) (0xc0007601e0) Stream removed, broadcasting: 1\nI0508 21:58:39.947804 2842 log.go:172] (0xc0001056b0) (0xc0008b2000) Stream removed, broadcasting: 3\nI0508 21:58:39.947815 2842 log.go:172] (0xc0001056b0) (0xc0008de000) Stream removed, broadcasting: 5\n" May 8 21:58:39.952: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:58:39.952: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:58:39.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:58:40.212: INFO: stderr: "I0508 21:58:40.087937 2862 log.go:172] (0xc00096a0b0) (0xc00077b5e0) Create stream\nI0508 21:58:40.088004 2862 log.go:172] (0xc00096a0b0) (0xc00077b5e0) Stream added, broadcasting: 1\nI0508 21:58:40.091457 2862 log.go:172] (0xc00096a0b0) Reply frame received for 1\nI0508 21:58:40.091480 2862 log.go:172] (0xc00096a0b0) (0xc0007d80a0) Create stream\nI0508 21:58:40.091487 2862 log.go:172] (0xc00096a0b0) (0xc0007d80a0) Stream added, broadcasting: 3\nI0508 21:58:40.092512 2862 log.go:172] (0xc00096a0b0) Reply frame received for 3\nI0508 21:58:40.092549 2862 log.go:172] (0xc00096a0b0) (0xc0007d8140) Create stream\nI0508 21:58:40.092564 2862 log.go:172] (0xc00096a0b0) (0xc0007d8140) Stream added, broadcasting: 5\nI0508 21:58:40.093868 2862 log.go:172] (0xc00096a0b0) Reply frame received for 5\nI0508 21:58:40.167089 2862 log.go:172] (0xc00096a0b0) Data frame received for 5\nI0508 21:58:40.167118 2862 log.go:172] (0xc0007d8140) (5) Data frame handling\nI0508 21:58:40.167137 2862 log.go:172] (0xc0007d8140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:58:40.203612 2862 log.go:172] (0xc00096a0b0) Data frame 
received for 3\nI0508 21:58:40.203657 2862 log.go:172] (0xc0007d80a0) (3) Data frame handling\nI0508 21:58:40.203697 2862 log.go:172] (0xc0007d80a0) (3) Data frame sent\nI0508 21:58:40.203906 2862 log.go:172] (0xc00096a0b0) Data frame received for 3\nI0508 21:58:40.203935 2862 log.go:172] (0xc0007d80a0) (3) Data frame handling\nI0508 21:58:40.204076 2862 log.go:172] (0xc00096a0b0) Data frame received for 5\nI0508 21:58:40.204116 2862 log.go:172] (0xc0007d8140) (5) Data frame handling\nI0508 21:58:40.206247 2862 log.go:172] (0xc00096a0b0) Data frame received for 1\nI0508 21:58:40.206280 2862 log.go:172] (0xc00077b5e0) (1) Data frame handling\nI0508 21:58:40.206300 2862 log.go:172] (0xc00077b5e0) (1) Data frame sent\nI0508 21:58:40.206315 2862 log.go:172] (0xc00096a0b0) (0xc00077b5e0) Stream removed, broadcasting: 1\nI0508 21:58:40.206356 2862 log.go:172] (0xc00096a0b0) Go away received\nI0508 21:58:40.206823 2862 log.go:172] (0xc00096a0b0) (0xc00077b5e0) Stream removed, broadcasting: 1\nI0508 21:58:40.206858 2862 log.go:172] (0xc00096a0b0) (0xc0007d80a0) Stream removed, broadcasting: 3\nI0508 21:58:40.206879 2862 log.go:172] (0xc00096a0b0) (0xc0007d8140) Stream removed, broadcasting: 5\n" May 8 21:58:40.213: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:58:40.213: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:58:40.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 21:58:40.505: INFO: stderr: "I0508 21:58:40.382682 2884 log.go:172] (0xc000592dc0) (0xc000930000) Create stream\nI0508 21:58:40.382749 2884 log.go:172] (0xc000592dc0) (0xc000930000) Stream added, broadcasting: 1\nI0508 21:58:40.386032 2884 log.go:172] (0xc000592dc0) Reply frame received for 1\nI0508 21:58:40.386077 2884 log.go:172] (0xc000592dc0) (0xc000a14000) Create stream\nI0508 21:58:40.386088 2884 log.go:172] (0xc000592dc0) (0xc000a14000) Stream added, broadcasting: 3\nI0508 21:58:40.387082 2884 log.go:172] (0xc000592dc0) Reply frame received for 3\nI0508 21:58:40.387123 2884 log.go:172] (0xc000592dc0) (0xc000930140) Create stream\nI0508 21:58:40.387136 2884 log.go:172] (0xc000592dc0) (0xc000930140) Stream added, broadcasting: 5\nI0508 21:58:40.388209 2884 log.go:172] (0xc000592dc0) Reply frame received for 5\nI0508 21:58:40.454509 2884 log.go:172] (0xc000592dc0) Data frame received for 5\nI0508 21:58:40.454536 2884 log.go:172] (0xc000930140) (5) Data frame handling\nI0508 21:58:40.454554 2884 log.go:172] (0xc000930140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 21:58:40.498358 2884 log.go:172] (0xc000592dc0) Data frame received for 3\nI0508 21:58:40.498379 2884 log.go:172] (0xc000a14000) (3) Data frame handling\nI0508 21:58:40.498386 2884 log.go:172] (0xc000a14000) (3) Data frame sent\nI0508 21:58:40.498805 2884 log.go:172] (0xc000592dc0) Data frame received for 5\nI0508 21:58:40.498826 2884 log.go:172] (0xc000930140) (5) Data frame handling\nI0508 21:58:40.498858 2884 log.go:172] (0xc000592dc0) Data frame received for 3\nI0508 21:58:40.498883 2884 log.go:172] (0xc000a14000) (3) Data frame handling\nI0508 21:58:40.501045 2884 log.go:172] (0xc000592dc0) Data frame received for 1\nI0508 21:58:40.501064 2884 log.go:172] (0xc000930000) (1) Data frame handling\nI0508 21:58:40.501080 2884 log.go:172] 
(0xc000930000) (1) Data frame sent\nI0508 21:58:40.501091 2884 log.go:172] (0xc000592dc0) (0xc000930000) Stream removed, broadcasting: 1\nI0508 21:58:40.501589 2884 log.go:172] (0xc000592dc0) (0xc000930000) Stream removed, broadcasting: 1\nI0508 21:58:40.501616 2884 log.go:172] (0xc000592dc0) (0xc000a14000) Stream removed, broadcasting: 3\nI0508 21:58:40.501670 2884 log.go:172] (0xc000592dc0) Go away received\nI0508 21:58:40.501790 2884 log.go:172] (0xc000592dc0) (0xc000930140) Stream removed, broadcasting: 5\n" May 8 21:58:40.505: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 21:58:40.505: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 21:58:40.505: INFO: Waiting for statefulset status.replicas updated to 0 May 8 21:58:40.515: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 8 21:58:50.534: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 21:58:50.534: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 8 21:58:50.534: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 8 21:58:50.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999758s May 8 21:58:51.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993355274s May 8 21:58:52.561: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984002034s May 8 21:58:53.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978110827s May 8 21:58:54.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.970776292s May 8 21:58:55.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.961233762s May 8 21:58:56.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956096235s May 8 21:58:57.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950477195s May 8 21:58:58.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941873711s May 8 21:58:59.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.071069ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6519 May 8 21:59:00.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:59:00.826: INFO: stderr: "I0508 21:59:00.733676 2904 log.go:172] (0xc0003db080) (0xc000968000) Create stream\nI0508 21:59:00.733737 2904 log.go:172] (0xc0003db080) (0xc000968000) Stream added, broadcasting: 1\nI0508 21:59:00.736207 2904 log.go:172] (0xc0003db080) Reply frame received for 1\nI0508 21:59:00.736268 2904 log.go:172] (0xc0003db080) (0xc0009d2000) Create stream\nI0508 21:59:00.736286 2904 log.go:172] (0xc0003db080) (0xc0009d2000) Stream added, broadcasting: 3\nI0508 21:59:00.737526 2904 log.go:172] (0xc0003db080) Reply frame received for 3\nI0508 21:59:00.737571 2904 log.go:172] (0xc0003db080) (0xc0009680a0) Create stream\nI0508 21:59:00.737590 2904 log.go:172] (0xc0003db080) (0xc0009680a0) Stream added, broadcasting: 5\nI0508 21:59:00.738718 2904 log.go:172] (0xc0003db080) Reply frame received for 5\nI0508 21:59:00.819810 2904 log.go:172] (0xc0003db080) Data frame received for 3\nI0508 21:59:00.819843 2904 log.go:172] (0xc0009d2000) (3) 
Data frame handling\nI0508 21:59:00.819865 2904 log.go:172] (0xc0003db080) Data frame received for 5\nI0508 21:59:00.819888 2904 log.go:172] (0xc0009680a0) (5) Data frame handling\nI0508 21:59:00.819908 2904 log.go:172] (0xc0009680a0) (5) Data frame sent\nI0508 21:59:00.819919 2904 log.go:172] (0xc0003db080) Data frame received for 5\nI0508 21:59:00.819926 2904 log.go:172] (0xc0009680a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 21:59:00.819945 2904 log.go:172] (0xc0009d2000) (3) Data frame sent\nI0508 21:59:00.819960 2904 log.go:172] (0xc0003db080) Data frame received for 3\nI0508 21:59:00.819982 2904 log.go:172] (0xc0009d2000) (3) Data frame handling\nI0508 21:59:00.821534 2904 log.go:172] (0xc0003db080) Data frame received for 1\nI0508 21:59:00.821552 2904 log.go:172] (0xc000968000) (1) Data frame handling\nI0508 21:59:00.821561 2904 log.go:172] (0xc000968000) (1) Data frame sent\nI0508 21:59:00.821586 2904 log.go:172] (0xc0003db080) (0xc000968000) Stream removed, broadcasting: 1\nI0508 21:59:00.821610 2904 log.go:172] (0xc0003db080) Go away received\nI0508 21:59:00.821977 2904 log.go:172] (0xc0003db080) (0xc000968000) Stream removed, broadcasting: 1\nI0508 21:59:00.821995 2904 log.go:172] (0xc0003db080) (0xc0009d2000) Stream removed, broadcasting: 3\nI0508 21:59:00.822002 2904 log.go:172] (0xc0003db080) (0xc0009680a0) Stream removed, broadcasting: 5\n" May 8 21:59:00.826: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 21:59:00.826: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 21:59:00.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:59:01.030: INFO: stderr: "I0508 21:59:00.950867 2926 log.go:172] (0xc000a40000) (0xc000b58000) Create stream\nI0508 21:59:00.950920 2926 log.go:172] (0xc000a40000) (0xc000b58000) Stream added, broadcasting: 1\nI0508 21:59:00.952988 2926 log.go:172] (0xc000a40000) Reply frame received for 1\nI0508 21:59:00.953032 2926 log.go:172] (0xc000a40000) (0xc000017f40) Create stream\nI0508 21:59:00.953044 2926 log.go:172] (0xc000a40000) (0xc000017f40) Stream added, broadcasting: 3\nI0508 21:59:00.954497 2926 log.go:172] (0xc000a40000) Reply frame received for 3\nI0508 21:59:00.954567 2926 log.go:172] (0xc000a40000) (0xc0009d4000) Create stream\nI0508 21:59:00.954601 2926 log.go:172] (0xc000a40000) (0xc0009d4000) Stream added, broadcasting: 5\nI0508 21:59:00.955461 2926 log.go:172] (0xc000a40000) Reply frame received for 5\nI0508 21:59:01.023525 2926 log.go:172] (0xc000a40000) Data frame received for 3\nI0508 21:59:01.023567 2926 log.go:172] (0xc000017f40) (3) Data frame handling\nI0508 21:59:01.023583 2926 log.go:172] (0xc000017f40) (3) Data frame sent\nI0508 21:59:01.023594 2926 log.go:172] (0xc000a40000) Data frame received for 3\nI0508 21:59:01.023602 2926 log.go:172] (0xc000017f40) (3) Data frame handling\nI0508 21:59:01.023642 2926 log.go:172] (0xc000a40000) Data frame received for 5\nI0508 21:59:01.023665 2926 log.go:172] (0xc0009d4000) (5) Data frame handling\nI0508 21:59:01.023684 2926 log.go:172] (0xc0009d4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 21:59:01.023707 2926 log.go:172] (0xc000a40000) Data frame received for 5\nI0508 21:59:01.023715 2926 log.go:172] (0xc0009d4000) (5) Data frame 
handling\nI0508 21:59:01.025105 2926 log.go:172] (0xc000a40000) Data frame received for 1\nI0508 21:59:01.025266 2926 log.go:172] (0xc000b58000) (1) Data frame handling\nI0508 21:59:01.025283 2926 log.go:172] (0xc000b58000) (1) Data frame sent\nI0508 21:59:01.025306 2926 log.go:172] (0xc000a40000) (0xc000b58000) Stream removed, broadcasting: 1\nI0508 21:59:01.025321 2926 log.go:172] (0xc000a40000) Go away received\nI0508 21:59:01.025632 2926 log.go:172] (0xc000a40000) (0xc000b58000) Stream removed, broadcasting: 1\nI0508 21:59:01.025646 2926 log.go:172] (0xc000a40000) (0xc000017f40) Stream removed, broadcasting: 3\nI0508 21:59:01.025653 2926 log.go:172] (0xc000a40000) (0xc0009d4000) Stream removed, broadcasting: 5\n" May 8 21:59:01.030: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 21:59:01.030: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 21:59:01.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6519 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 21:59:01.256: INFO: stderr: "I0508 21:59:01.164752 2946 log.go:172] (0xc0000f4c60) (0xc00096a000) Create stream\nI0508 21:59:01.164845 2946 log.go:172] (0xc0000f4c60) (0xc00096a000) Stream added, broadcasting: 1\nI0508 21:59:01.171850 2946 log.go:172] (0xc0000f4c60) Reply frame received for 1\nI0508 21:59:01.171888 2946 log.go:172] (0xc0000f4c60) (0xc00096a0a0) Create stream\nI0508 21:59:01.171898 2946 log.go:172] (0xc0000f4c60) (0xc00096a0a0) Stream added, broadcasting: 3\nI0508 21:59:01.173566 2946 log.go:172] (0xc0000f4c60) Reply frame received for 3\nI0508 21:59:01.173598 2946 log.go:172] (0xc0000f4c60) (0xc00096a140) Create stream\nI0508 21:59:01.173606 2946 log.go:172] (0xc0000f4c60) (0xc00096a140) Stream added, broadcasting: 5\nI0508 21:59:01.175130 2946 log.go:172] (0xc0000f4c60) Reply frame received for 5\nI0508 21:59:01.244019 2946 log.go:172] (0xc0000f4c60) Data frame received for 5\nI0508 21:59:01.244052 2946 log.go:172] (0xc00096a140) (5) Data frame handling\nI0508 21:59:01.244077 2946 log.go:172] (0xc00096a140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 21:59:01.250064 2946 log.go:172] (0xc0000f4c60) Data frame received for 3\nI0508 21:59:01.250084 2946 log.go:172] (0xc00096a0a0) (3) Data frame handling\nI0508 21:59:01.250096 2946 log.go:172] (0xc00096a0a0) (3) Data frame sent\nI0508 21:59:01.250225 2946 log.go:172] (0xc0000f4c60) Data frame received for 5\nI0508 21:59:01.250255 2946 log.go:172] (0xc00096a140) (5) Data frame handling\nI0508 21:59:01.250280 2946 log.go:172] (0xc0000f4c60) Data frame received for 3\nI0508 21:59:01.250299 2946 log.go:172] (0xc00096a0a0) (3) Data frame handling\nI0508 21:59:01.251614 2946 log.go:172] (0xc0000f4c60) Data frame received for 1\nI0508 21:59:01.251631 2946 log.go:172] (0xc00096a000) (1) Data frame handling\nI0508 21:59:01.251645 2946 log.go:172] (0xc00096a000) (1) Data frame sent\nI0508 21:59:01.251656 2946 log.go:172] (0xc0000f4c60) (0xc00096a000) Stream removed, broadcasting: 1\nI0508 21:59:01.251742 2946 log.go:172] (0xc0000f4c60) Go away received\nI0508 21:59:01.251967 2946 log.go:172] (0xc0000f4c60) (0xc00096a000) Stream removed, broadcasting: 1\nI0508 21:59:01.251980 2946 log.go:172] (0xc0000f4c60) (0xc00096a0a0) Stream removed, broadcasting: 3\nI0508 21:59:01.251989 2946 log.go:172] (0xc0000f4c60) (0xc00096a140) Stream 
removed, broadcasting: 5\n" May 8 21:59:01.256: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 21:59:01.256: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 21:59:01.256: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 8 21:59:31.284: INFO: Deleting all statefulset in ns statefulset-6519 May 8 21:59:31.287: INFO: Scaling statefulset ss to 0 May 8 21:59:31.297: INFO: Waiting for statefulset status.replicas updated to 0 May 8 21:59:31.300: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:59:31.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6519" for this suite. • [SLOW TEST:92.741 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":153,"skipped":2555,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:59:31.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 21:59:44.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9876" for this suite. • [SLOW TEST:13.216 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":154,"skipped":2571,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 21:59:44.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0508 22:00:15.148075 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 8 22:00:15.148: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:00:15.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5694" for this suite. • [SLOW TEST:30.616 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":155,"skipped":2572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:00:15.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:00:15.251: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-06124d6f-47f8-47da-a4ab-73d65abea590" in namespace "security-context-test-6766" to be "success or failure" May 8 22:00:15.292: INFO: Pod "busybox-privileged-false-06124d6f-47f8-47da-a4ab-73d65abea590": Phase="Pending", Reason="", readiness=false. Elapsed: 41.018848ms May 8 22:00:17.297: INFO: Pod "busybox-privileged-false-06124d6f-47f8-47da-a4ab-73d65abea590": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04569839s May 8 22:00:19.301: INFO: Pod "busybox-privileged-false-06124d6f-47f8-47da-a4ab-73d65abea590": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050034615s May 8 22:00:19.301: INFO: Pod "busybox-privileged-false-06124d6f-47f8-47da-a4ab-73d65abea590" satisfied condition "success or failure" May 8 22:00:19.322: INFO: Got logs for pod "busybox-privileged-false-06124d6f-47f8-47da-a4ab-73d65abea590": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:00:19.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6766" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2599,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:00:19.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 8 22:00:19.459: INFO: Waiting up to 5m0s for pod "pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89" in namespace "emptydir-9277" to be "success or failure" May 8 22:00:19.472: INFO: Pod "pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89": Phase="Pending", Reason="", readiness=false. Elapsed: 12.993634ms May 8 22:00:21.477: INFO: Pod "pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017811878s May 8 22:00:23.487: INFO: Pod "pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027495058s STEP: Saw pod success May 8 22:00:23.487: INFO: Pod "pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89" satisfied condition "success or failure" May 8 22:00:23.489: INFO: Trying to get logs from node jerma-worker pod pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89 container test-container: STEP: delete the pod May 8 22:00:23.509: INFO: Waiting for pod pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89 to disappear May 8 22:00:23.533: INFO: Pod pod-f0df8ef7-bd1d-45e9-9d17-fe4a17219e89 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:00:23.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9277" for this suite. 
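A minimal sketch of the (root,0644,tmpfs) combination: running as root, writing a 0644 file into a memory-backed emptyDir (the shell commands stand in for the test binary's verification):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs instead of node-local disk
EOF

With medium: Memory the volume counts against the container's memory usage and is lost on node reboot, which is why the suite tests the tmpfs and default (disk) variants separately.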
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2602,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:00:23.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 8 22:00:31.733: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 22:00:31.739: INFO: Pod pod-with-poststart-exec-hook still exists May 8 22:00:33.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 22:00:33.743: INFO: Pod pod-with-poststart-exec-hook still exists May 8 22:00:35.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 22:00:35.744: INFO: Pod pod-with-poststart-exec-hook still exists May 8 22:00:37.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 22:00:37.743: INFO: Pod pod-with-poststart-exec-hook still exists May 8 22:00:39.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 22:00:39.746: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:00:39.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6379" for this suite. 
• [SLOW TEST:16.215 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2609,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:00:39.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:00:39.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a" in namespace "downward-api-7922" to be "success or failure" May 8 22:00:39.871: INFO: Pod "downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22767ms May 8 22:00:41.886: INFO: Pod "downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020847072s May 8 22:00:43.890: INFO: Pod "downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025231509s STEP: Saw pod success May 8 22:00:43.891: INFO: Pod "downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a" satisfied condition "success or failure" May 8 22:00:43.894: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a container client-container: STEP: delete the pod May 8 22:00:44.038: INFO: Waiting for pod downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a to disappear May 8 22:00:44.179: INFO: Pod downwardapi-volume-5fbfa2ee-2808-4e73-ad61-cef493ee930a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:00:44.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7922" for this suite. 
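What this test asserts, that limits.cpu falls back to the node's allocatable CPU when no limit is declared, can be observed with a resourceFieldRef projection (the file path and names are invented):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu      # no limit declared, so node allocatable CPU is reported
EOF

Note that resourceFieldRef in a volume (unlike in env) must name the container explicitly, since the volume is shared across the pod.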
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:00:44.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 8 22:00:44.271: INFO: Waiting up to 5m0s for pod "var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0" in namespace "var-expansion-7519" to be "success or failure" May 8 22:00:44.325: INFO: Pod "var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0": Phase="Pending", Reason="", readiness=false. Elapsed: 54.191614ms May 8 22:00:46.330: INFO: Pod "var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058803562s May 8 22:00:48.359: INFO: Pod "var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087909662s STEP: Saw pod success May 8 22:00:48.359: INFO: Pod "var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0" satisfied condition "success or failure" May 8 22:00:48.363: INFO: Trying to get logs from node jerma-worker pod var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0 container dapi-container: STEP: delete the pod May 8 22:00:48.391: INFO: Waiting for pod var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0 to disappear May 8 22:00:48.449: INFO: Pod var-expansion-561370be-0e88-40b1-8c6d-2fae7967b0f0 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:00:48.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7519" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2641,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:00:48.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 8 22:00:48.647: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:00:48.652: INFO: Number of nodes with available pods: 0 May 8 22:00:48.652: INFO: Node jerma-worker is running more than one daemon pod May 8 22:00:49.658: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:00:49.662: INFO: Number of nodes with available pods: 0 May 8 22:00:49.662: INFO: Node jerma-worker is running more than one daemon pod May 8 22:00:50.760: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:00:50.776: INFO: Number of nodes with available pods: 0 May 8 22:00:50.776: INFO: Node jerma-worker is running more than one daemon pod May 8 22:00:51.665: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:00:51.669: INFO: Number of nodes with available pods: 0 May 8 22:00:51.669: INFO: Node jerma-worker is running more than one daemon pod May 8 22:00:52.656: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:00:52.660: INFO: Number of nodes with available pods: 1 May 8 22:00:52.660: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:00:53.671: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:00:53.675: INFO: Number of nodes with available pods: 2 May 8 22:00:53.675: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 8 22:00:53.701: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:00:53.708: INFO: Number of nodes with available pods: 2 May 8 22:00:53.708: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5048, will wait for the garbage collector to delete the pods May 8 22:00:54.791: INFO: Deleting DaemonSet.extensions daemon-set took: 7.467737ms May 8 22:00:55.092: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.237805ms May 8 22:00:57.894: INFO: Number of nodes with available pods: 0 May 8 22:00:57.894: INFO: Number of running nodes: 0, number of available pods: 0 May 8 22:00:57.896: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5048/daemonsets","resourceVersion":"14544175"},"items":null} May 8 22:00:57.898: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5048/pods","resourceVersion":"14544175"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:00:57.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5048" for this suite. • [SLOW TEST:9.457 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":161,"skipped":2644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:00:57.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 8 22:00:58.014: INFO: Waiting up to 5m0s for pod "pod-ef863d31-2e77-4292-9f34-2c3b385a99bf" in namespace "emptydir-956" to be "success or failure" May 8 22:00:58.037: INFO: Pod "pod-ef863d31-2e77-4292-9f34-2c3b385a99bf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.430717ms May 8 22:01:00.041: INFO: Pod "pod-ef863d31-2e77-4292-9f34-2c3b385a99bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026570062s May 8 22:01:02.045: INFO: Pod "pod-ef863d31-2e77-4292-9f34-2c3b385a99bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031022451s STEP: Saw pod success May 8 22:01:02.046: INFO: Pod "pod-ef863d31-2e77-4292-9f34-2c3b385a99bf" satisfied condition "success or failure" May 8 22:01:02.049: INFO: Trying to get logs from node jerma-worker pod pod-ef863d31-2e77-4292-9f34-2c3b385a99bf container test-container: STEP: delete the pod May 8 22:01:02.088: INFO: Waiting for pod pod-ef863d31-2e77-4292-9f34-2c3b385a99bf to disappear May 8 22:01:02.102: INFO: Pod pod-ef863d31-2e77-4292-9f34-2c3b385a99bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:01:02.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-956" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:01:02.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:01:02.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb" in namespace "downward-api-1648" to be "success or failure" May 8 22:01:02.204: INFO: Pod "downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.494883ms May 8 22:01:04.208: INFO: Pod "downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014908255s May 8 22:01:06.212: INFO: Pod "downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019254458s STEP: Saw pod success May 8 22:01:06.213: INFO: Pod "downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb" satisfied condition "success or failure" May 8 22:01:06.216: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb container client-container: STEP: delete the pod May 8 22:01:06.242: INFO: Waiting for pod downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb to disappear May 8 22:01:06.272: INFO: Pod downwardapi-volume-46a25e7b-98d0-4a3f-93c4-09a7fb8f69fb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:01:06.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1648" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2695,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:01:06.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-lmfn5 in namespace proxy-4206 I0508 22:01:06.417844 6 runners.go:189] Created replication controller with name: proxy-service-lmfn5, namespace: proxy-4206, replica count: 1 I0508 22:01:07.468374 6 runners.go:189] proxy-service-lmfn5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 22:01:08.468580 6 runners.go:189] proxy-service-lmfn5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 22:01:09.468848 6 runners.go:189] proxy-service-lmfn5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0508 22:01:10.469101 6 runners.go:189] proxy-service-lmfn5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0508 22:01:11.469574 6 runners.go:189] proxy-service-lmfn5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0508 22:01:12.469821 6 runners.go:189] proxy-service-lmfn5 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 22:01:12.473: INFO: setup took 6.120379464s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 8 22:01:12.480: INFO: (0) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 6.013365ms) May 8 22:01:12.480: INFO: (0) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 6.193333ms) May 8 22:01:12.481: INFO: (0) 
/api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 7.70466ms) May 8 22:01:12.481: INFO: (0) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 7.883195ms) May 8 22:01:12.481: INFO: (0) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 7.823109ms) May 8 22:01:12.481: INFO: (0) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 8.046881ms) May 8 22:01:12.481: INFO: (0) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 8.049082ms) May 8 22:01:12.482: INFO: (0) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 8.265797ms) May 8 22:01:12.484: INFO: (0) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 10.522885ms) May 8 22:01:12.484: INFO: (0) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 10.563916ms) May 8 22:01:12.484: INFO: (0) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 10.915477ms) May 8 22:01:12.490: INFO: (0) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 16.706355ms) May 8 22:01:12.490: INFO: (0) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 16.820613ms) May 8 22:01:12.490: INFO: (0) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 16.837587ms) May 8 22:01:12.490: INFO: (0) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 16.750699ms) May 8 22:01:12.491: INFO: (0) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test<... (200; 3.768995ms) May 8 22:01:12.495: INFO: (1) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 3.851776ms) May 8 22:01:12.495: INFO: (1) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: ... 
(200; 4.283172ms) May 8 22:01:12.497: INFO: (1) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 5.593982ms) May 8 22:01:12.497: INFO: (1) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 6.066416ms) May 8 22:01:12.497: INFO: (1) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 6.16235ms) May 8 22:01:12.497: INFO: (1) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 6.067613ms) May 8 22:01:12.497: INFO: (1) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 6.059934ms) May 8 22:01:12.497: INFO: (1) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 6.518992ms) May 8 22:01:12.502: INFO: (2) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.192831ms) May 8 22:01:12.502: INFO: (2) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.161341ms) May 8 22:01:12.503: INFO: (2) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 5.049057ms) May 8 22:01:12.503: INFO: (2) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 5.514791ms) May 8 22:01:12.503: INFO: (2) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 5.682941ms) May 8 22:01:12.503: INFO: (2) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 5.621112ms) May 8 22:01:12.503: INFO: (2) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 5.682871ms) May 8 22:01:12.503: INFO: (2) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 5.637377ms) May 8 22:01:12.503: INFO: (2) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 5.781016ms) May 8 22:01:12.504: INFO: (2) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 6.023703ms) May 8 22:01:12.504: INFO: (2) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 6.012934ms) May 8 22:01:12.504: INFO: (2) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 6.152454ms) May 8 22:01:12.504: INFO: (2) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 6.300879ms) May 8 22:01:12.504: INFO: (2) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 6.592066ms) May 8 22:01:12.504: INFO: (2) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: ... (200; 4.741057ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 5.088338ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... 
(200; 5.283599ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 5.35284ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 5.264404ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.293165ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 5.503477ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 5.485881ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 5.562805ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 5.668584ms) May 8 22:01:12.510: INFO: (3) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.600018ms) May 8 22:01:12.511: INFO: (3) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 5.781348ms) May 8 22:01:12.511: INFO: (3) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test<... (200; 3.814351ms) May 8 22:01:12.515: INFO: (4) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 4.488665ms) May 8 22:01:12.516: INFO: (4) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 4.870844ms) May 8 22:01:12.516: INFO: (4) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 5.167982ms) May 8 22:01:12.516: INFO: (4) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.251621ms) May 8 22:01:12.516: INFO: (4) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 5.188947ms) May 8 22:01:12.516: INFO: (4) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 5.20845ms) May 8 22:01:12.516: INFO: (4) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 5.256785ms) May 8 22:01:12.516: INFO: (4) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 5.320709ms) May 8 22:01:12.517: INFO: (4) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 5.29708ms) May 8 22:01:12.517: INFO: (4) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 5.657766ms) May 8 22:01:12.520: INFO: (5) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 3.134751ms) May 8 22:01:12.520: INFO: (5) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 3.331144ms) May 8 22:01:12.520: INFO: (5) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: ... 
(200; 3.417056ms) May 8 22:01:12.521: INFO: (5) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.053369ms) May 8 22:01:12.521: INFO: (5) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 4.337981ms) May 8 22:01:12.521: INFO: (5) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 4.283602ms) May 8 22:01:12.521: INFO: (5) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 4.366551ms) May 8 22:01:12.521: INFO: (5) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 4.348193ms) May 8 22:01:12.521: INFO: (5) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.426634ms) May 8 22:01:12.522: INFO: (5) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 4.665029ms) May 8 22:01:12.522: INFO: (5) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.721399ms) May 8 22:01:12.522: INFO: (5) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 4.684143ms) May 8 22:01:12.522: INFO: (5) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 4.721018ms) May 8 22:01:12.522: INFO: (5) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 4.730404ms) May 8 22:01:12.529: INFO: (6) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 7.593634ms) May 8 22:01:12.530: INFO: (6) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 7.743077ms) May 8 22:01:12.530: INFO: (6) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 7.968793ms) May 8 22:01:12.533: INFO: (6) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 11.611272ms) May 8 22:01:12.534: INFO: (6) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test<... 
(200; 12.332978ms) May 8 22:01:12.534: INFO: (6) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 12.382082ms) May 8 22:01:12.534: INFO: (6) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 12.382114ms) May 8 22:01:12.535: INFO: (6) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 12.880868ms) May 8 22:01:12.535: INFO: (6) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 13.448202ms) May 8 22:01:12.536: INFO: (6) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 13.772277ms) May 8 22:01:12.536: INFO: (6) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 13.775678ms) May 8 22:01:12.542: INFO: (7) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 6.056476ms) May 8 22:01:12.542: INFO: (7) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 6.097047ms) May 8 22:01:12.542: INFO: (7) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 6.026865ms) May 8 22:01:12.542: INFO: (7) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 5.998086ms) May 8 22:01:12.542: INFO: (7) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 6.663358ms) May 8 22:01:12.543: INFO: (7) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 6.784109ms) May 8 22:01:12.543: INFO: (7) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 7.017731ms) May 8 22:01:12.543: INFO: (7) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 7.020587ms) May 8 22:01:12.543: INFO: (7) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 7.078119ms) May 8 22:01:12.543: INFO: (7) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 7.141942ms) May 8 22:01:12.543: INFO: (7) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 7.153004ms) May 8 22:01:12.543: INFO: (7) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test<... (200; 4.438264ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... 
(200; 4.466198ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 4.502077ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 4.500456ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 4.556178ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 4.524311ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 4.770281ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.704676ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 4.81575ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 4.769616ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 4.876753ms) May 8 22:01:12.548: INFO: (8) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 3.336094ms) May 8 22:01:12.552: INFO: (9) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: ... (200; 4.65022ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.652909ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 4.735606ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 4.731744ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.915814ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 5.035243ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 5.014103ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 5.1757ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.202199ms) May 8 22:01:12.553: INFO: (9) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.339526ms) May 8 22:01:12.554: INFO: (9) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 5.514963ms) May 8 22:01:12.554: INFO: (9) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 5.76002ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.419867ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 4.588636ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... 
(200; 4.711032ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.35957ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.284495ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 4.220886ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 4.715802ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 4.036212ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.227638ms) May 8 22:01:12.559: INFO: (10) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 4.598151ms) May 8 22:01:12.562: INFO: (10) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 6.849081ms) May 8 22:01:12.562: INFO: (10) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 6.691967ms) May 8 22:01:12.562: INFO: (10) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 6.583196ms) May 8 22:01:12.562: INFO: (10) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 6.789967ms) May 8 22:01:12.562: INFO: (10) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 6.541651ms) May 8 22:01:12.564: INFO: (11) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 2.73392ms) May 8 22:01:12.566: INFO: (11) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 4.271712ms) May 8 22:01:12.567: INFO: (11) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 4.920647ms) May 8 22:01:12.567: INFO: (11) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.973014ms) May 8 22:01:12.567: INFO: (11) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.317076ms) May 8 22:01:12.567: INFO: (11) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.395756ms) May 8 22:01:12.568: INFO: (11) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 6.023863ms) May 8 22:01:12.568: INFO: (11) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 6.128446ms) May 8 22:01:12.568: INFO: (11) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 6.271448ms) May 8 22:01:12.568: INFO: (11) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 6.234479ms) May 8 22:01:12.568: INFO: (11) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 6.391359ms) May 8 22:01:12.568: INFO: (11) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 6.445241ms) May 8 22:01:12.568: INFO: (11) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 3.256936ms) May 8 22:01:12.572: INFO: (12) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: ... 
(200; 3.264392ms) May 8 22:01:12.572: INFO: (12) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 3.310453ms) May 8 22:01:12.572: INFO: (12) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 3.334447ms) May 8 22:01:12.573: INFO: (12) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 4.947985ms) May 8 22:01:12.574: INFO: (12) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 5.185049ms) May 8 22:01:12.574: INFO: (12) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 5.250774ms) May 8 22:01:12.574: INFO: (12) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 5.272123ms) May 8 22:01:12.574: INFO: (12) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 5.309839ms) May 8 22:01:12.574: INFO: (12) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 5.500597ms) May 8 22:01:12.577: INFO: (13) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 3.128198ms) May 8 22:01:12.577: INFO: (13) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 3.170748ms) May 8 22:01:12.577: INFO: (13) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 3.21314ms) May 8 22:01:12.577: INFO: (13) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 3.36815ms) May 8 22:01:12.577: INFO: (13) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 3.379552ms) May 8 22:01:12.577: INFO: (13) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 3.381608ms) May 8 22:01:12.577: INFO: (13) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 3.408686ms) May 8 22:01:12.579: INFO: (13) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 4.689224ms) May 8 22:01:12.579: INFO: (13) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 4.738689ms) May 8 22:01:12.579: INFO: (13) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 4.67915ms) May 8 22:01:12.579: INFO: (13) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 4.665017ms) May 8 22:01:12.579: INFO: (13) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 4.781621ms) May 8 22:01:12.579: INFO: (13) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 4.810301ms) May 8 22:01:12.579: INFO: (13) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 5.694247ms) May 8 22:01:12.585: INFO: (14) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 5.746547ms) May 8 22:01:12.585: INFO: (14) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 5.873146ms) May 8 22:01:12.585: INFO: (14) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 5.895976ms) May 8 22:01:12.585: INFO: (14) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... 
(200; 6.004592ms) May 8 22:01:12.585: INFO: (14) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 6.018341ms) May 8 22:01:12.585: INFO: (14) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 6.075593ms) May 8 22:01:12.585: INFO: (14) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 3.829672ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.168087ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 4.25207ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 4.354979ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.306711ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 4.269113ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.344001ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 4.325905ms) May 8 22:01:12.590: INFO: (15) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: ... (200; 3.231237ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.212831ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.444279ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 4.49693ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.549797ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 4.617073ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 4.616982ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 4.672342ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 4.722128ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 4.784737ms) May 8 22:01:12.596: INFO: (16) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 4.812817ms) May 8 22:01:12.597: INFO: (16) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 5.314158ms) May 8 22:01:12.600: INFO: (17) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 2.817998ms) May 8 22:01:12.600: INFO: (17) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... 
(200; 2.97338ms) May 8 22:01:12.600: INFO: (17) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 3.18676ms) May 8 22:01:12.601: INFO: (17) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.147867ms) May 8 22:01:12.601: INFO: (17) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.353952ms) May 8 22:01:12.602: INFO: (17) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 4.460977ms) May 8 22:01:12.602: INFO: (17) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 4.45508ms) May 8 22:01:12.602: INFO: (17) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:462/proxy/: tls qux (200; 4.46724ms) May 8 22:01:12.602: INFO: (17) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:460/proxy/: tls baz (200; 4.464516ms) May 8 22:01:12.602: INFO: (17) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test (200; 3.689422ms) May 8 22:01:12.607: INFO: (18) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 3.675614ms) May 8 22:01:12.607: INFO: (18) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:1080/proxy/: ... (200; 3.679099ms) May 8 22:01:12.607: INFO: (18) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.370838ms) May 8 22:01:12.608: INFO: (18) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 4.70507ms) May 8 22:01:12.608: INFO: (18) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 4.902987ms) May 8 22:01:12.608: INFO: (18) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 5.031316ms) May 8 22:01:12.608: INFO: (18) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n:1080/proxy/: test<... (200; 5.186356ms) May 8 22:01:12.608: INFO: (18) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: ... (200; 7.319185ms) May 8 22:01:12.617: INFO: (19) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:160/proxy/: foo (200; 7.431713ms) May 8 22:01:12.617: INFO: (19) /api/v1/namespaces/proxy-4206/pods/https:proxy-service-lmfn5-f5b7n:443/proxy/: test<... 
(200; 8.488487ms) May 8 22:01:12.618: INFO: (19) /api/v1/namespaces/proxy-4206/pods/http:proxy-service-lmfn5-f5b7n:162/proxy/: bar (200; 8.465057ms) May 8 22:01:12.618: INFO: (19) /api/v1/namespaces/proxy-4206/pods/proxy-service-lmfn5-f5b7n/proxy/: test (200; 8.520235ms) May 8 22:01:12.620: INFO: (19) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname1/proxy/: foo (200; 10.078119ms) May 8 22:01:12.620: INFO: (19) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname2/proxy/: bar (200; 10.188553ms) May 8 22:01:12.620: INFO: (19) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname1/proxy/: tls baz (200; 10.433013ms) May 8 22:01:12.620: INFO: (19) /api/v1/namespaces/proxy-4206/services/https:proxy-service-lmfn5:tlsportname2/proxy/: tls qux (200; 10.684149ms) May 8 22:01:12.620: INFO: (19) /api/v1/namespaces/proxy-4206/services/proxy-service-lmfn5:portname2/proxy/: bar (200; 10.839835ms) May 8 22:01:12.620: INFO: (19) /api/v1/namespaces/proxy-4206/services/http:proxy-service-lmfn5:portname1/proxy/: foo (200; 10.878408ms) STEP: deleting ReplicationController proxy-service-lmfn5 in namespace proxy-4206, will wait for the garbage collector to delete the pods May 8 22:01:12.680: INFO: Deleting ReplicationController proxy-service-lmfn5 took: 7.125081ms May 8 22:01:12.980: INFO: Terminating ReplicationController proxy-service-lmfn5 pods took: 300.354554ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:01:15.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4206" for this suite. • [SLOW TEST:9.613 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":164,"skipped":2709,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:01:15.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 8 22:01:15.943: INFO: >>> kubeConfig: /root/.kube/config May 8 22:01:18.987: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:01:29.503: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2241" for this suite. • [SLOW TEST:13.617 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":165,"skipped":2723,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:01:29.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-473 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-473 May 8 22:01:29.598: INFO: Found 0 stateful pods, waiting for 1 May 8 22:01:39.602: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 8 22:01:39.646: INFO: Deleting all statefulset in ns statefulset-473 May 8 22:01:39.661: INFO: Scaling statefulset ss to 0 May 8 22:01:59.713: INFO: Waiting for statefulset status.replicas updated to 0 May 8 22:01:59.716: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:01:59.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-473" for this suite. 
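------------------------------
The "getting scale subresource" and "updating a scale subresource" steps above correspond to the GetScale/UpdateScale calls on the StatefulSet client. A rough sketch against client-go of this run's vintage (v0.17.x, pre-context signatures), reusing the run's own kubeconfig path and the namespace and name from the log; the replica count is an arbitrary example:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GetScale returns an autoscaling/v1 Scale object, not the StatefulSet
	// itself; writing it back through UpdateScale is what "a working scale
	// subresource" means here.
	sts := cs.AppsV1().StatefulSets("statefulset-473")
	scale, err := sts.GetScale("ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2 // arbitrary example value
	if _, err := sts.UpdateScale("ss", scale); err != nil {
		panic(err)
	}
	fmt.Println("updated Spec.Replicas via the scale subresource")
}
------------------------------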
• [SLOW TEST:30.230 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":166,"skipped":2751,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:01:59.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 22:02:00.754: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 22:02:02.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572120, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572120, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572120, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572120, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:02:05.952: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:02:06.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9920" for this suite. STEP: Destroying namespace "webhook-9920-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.570 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":167,"skipped":2758,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:02:06.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-58ff72f2-013e-44fc-98b3-edb3a12b23a2 STEP: Creating a pod to test consume configMaps May 8 22:02:06.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6" in namespace "configmap-9729" to be "success or failure" May 8 22:02:06.512: INFO: Pod "pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.924449ms May 8 22:02:08.552: INFO: Pod "pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042199467s May 8 22:02:10.556: INFO: Pod "pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046458313s May 8 22:02:12.560: INFO: Pod "pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.050249869s STEP: Saw pod success May 8 22:02:12.560: INFO: Pod "pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6" satisfied condition "success or failure" May 8 22:02:12.563: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6 container configmap-volume-test: STEP: delete the pod May 8 22:02:12.591: INFO: Waiting for pod pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6 to disappear May 8 22:02:12.627: INFO: Pod pod-configmaps-b54f1328-c3e6-49b9-86a6-6ec04d873ec6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:02:12.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9729" for this suite. • [SLOW TEST:6.324 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2768,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:02:12.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:02:12.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc" in namespace "projected-8983" to be "success or failure" May 8 22:02:12.713: INFO: Pod "downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832864ms May 8 22:02:14.717: INFO: Pod "downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007808235s May 8 22:02:16.722: INFO: Pod "downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012238933s STEP: Saw pod success May 8 22:02:16.722: INFO: Pod "downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc" satisfied condition "success or failure" May 8 22:02:16.725: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc container client-container: STEP: delete the pod May 8 22:02:16.759: INFO: Waiting for pod downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc to disappear May 8 22:02:16.785: INFO: Pod downwardapi-volume-6ec43ba7-2b24-4fee-beb1-b1b4d16ae2bc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:02:16.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8983" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2775,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:02:16.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 8 22:02:16.905: INFO: Waiting up to 5m0s for pod "downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be" in namespace "downward-api-2839" to be "success or failure" May 8 22:02:16.908: INFO: Pod "downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.828303ms May 8 22:02:18.912: INFO: Pod "downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007025273s May 8 22:02:20.917: INFO: Pod "downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011681387s STEP: Saw pod success May 8 22:02:20.917: INFO: Pod "downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be" satisfied condition "success or failure" May 8 22:02:20.920: INFO: Trying to get logs from node jerma-worker pod downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be container dapi-container: STEP: delete the pod May 8 22:02:20.943: INFO: Waiting for pod downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be to disappear May 8 22:02:20.947: INFO: Pod downward-api-b7afd356-cde0-470a-a8ee-dc7c745831be no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:02:20.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2839" for this suite. 
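------------------------------
For readers following the Downward API spec just above: it surfaces the pod's own UID to the container through an env var fieldRef, then asserts the value from the container's output. A minimal sketch of an equivalent pod object using client-go's typed structs (the pod name, image, and command here are illustrative stand-ins, not the framework's fixtures):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// The kubelet resolves metadata.uid when it starts the
					// container, so the test can read it back from the logs.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------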
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:02:20.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a4258428-5098-4661-88f2-a7247b3a3b8e STEP: Creating a pod to test consume secrets May 8 22:02:21.111: INFO: Waiting up to 5m0s for pod "pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9" in namespace "secrets-8408" to be "success or failure" May 8 22:02:21.150: INFO: Pod "pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 38.546658ms May 8 22:02:23.153: INFO: Pod "pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041564261s May 8 22:02:25.157: INFO: Pod "pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045996018s STEP: Saw pod success May 8 22:02:25.157: INFO: Pod "pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9" satisfied condition "success or failure" May 8 22:02:25.160: INFO: Trying to get logs from node jerma-worker pod pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9 container secret-volume-test: STEP: delete the pod May 8 22:02:25.208: INFO: Waiting for pod pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9 to disappear May 8 22:02:25.220: INFO: Pod pod-secrets-69f8b574-e85f-4d13-8d4f-474db000f1d9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:02:25.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8408" for this suite. STEP: Destroying namespace "secret-namespace-7732" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2836,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:02:25.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 22:02:25.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5907' May 8 22:02:25.447: INFO: stderr: "" May 8 22:02:25.447: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 8 22:02:25.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5907' May 8 22:02:28.176: INFO: stderr: "" May 8 22:02:28.177: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:02:28.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5907" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":172,"skipped":2846,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:02:28.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4697.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4697.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 22:02:34.349: INFO: DNS probes using dns-test-ae75feae-81c0-437a-a3ea-e0f4c29ca18b succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4697.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4697.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 22:02:40.487: INFO: File wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:02:40.490: INFO: File jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains '' instead of 'bar.example.com.' May 8 22:02:40.490: INFO: Lookups using dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 failed for: [wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local] May 8 22:02:45.495: INFO: File wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:02:45.498: INFO: File jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 8 22:02:45.498: INFO: Lookups using dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 failed for: [wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local] May 8 22:02:50.495: INFO: File wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:02:50.499: INFO: File jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:02:50.499: INFO: Lookups using dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 failed for: [wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local] May 8 22:02:55.518: INFO: File wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:02:55.527: INFO: File jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:02:55.527: INFO: Lookups using dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 failed for: [wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local] May 8 22:03:00.495: INFO: File wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:03:00.499: INFO: File jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local from pod dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 22:03:00.499: INFO: Lookups using dns-4697/dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 failed for: [wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local] May 8 22:03:05.500: INFO: DNS probes using dns-test-1d010ae6-3c80-407f-b279-8be99b5a7a58 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4697.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4697.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4697.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4697.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 22:03:12.299: INFO: DNS probes using dns-test-98de07d7-ca22-40c4-926e-0015ff1725e9 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:12.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4697" for this suite. 
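------------------------------
The DNS spec above drives one Service through three states and watches cluster DNS follow along: a CNAME to foo.example.com, a CNAME to bar.example.com, then an ordinary A record once the type flips to ClusterIP. A condensed sketch of those mutations (modern client-go signatures, v0.18+; error handling trimmed to log.Fatal; names taken from the run above):

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	svcs := client.CoreV1().Services("dns-4697")

	// Phase 1: an ExternalName service is a pure DNS alias; cluster DNS
	// answers queries for it with a CNAME to spec.externalName.
	svc, err := svcs.Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Phase 2: changing spec.externalName retargets the CNAME.
	svc.Spec.ExternalName = "bar.example.com"
	if svc, err = svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Phase 3: converting to ClusterIP swaps the CNAME for an A record.
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}
	if _, err = svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}

The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are the probe pods re-running dig once a second until the updated CNAME propagates after phase 2.
------------------------------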
• [SLOW TEST:44.223 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":173,"skipped":2899,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:12.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:23.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-729" for this suite. • [SLOW TEST:11.276 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":174,"skipped":2915,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:23.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:03:23.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251" in namespace "projected-2343" to be "success or failure" May 8 22:03:23.767: INFO: Pod "downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352706ms May 8 22:03:25.772: INFO: Pod "downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007422154s May 8 22:03:27.776: INFO: Pod "downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011866801s STEP: Saw pod success May 8 22:03:27.776: INFO: Pod "downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251" satisfied condition "success or failure" May 8 22:03:27.779: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251 container client-container: STEP: delete the pod May 8 22:03:27.807: INFO: Waiting for pod downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251 to disappear May 8 22:03:27.838: INFO: Pod downwardapi-volume-a22de040-44b9-40bb-8ad1-70cdd0604251 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:27.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2343" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2916,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:27.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:31.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1734" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2918,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:31.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a1d41978-8f42-4d56-9a4e-3a975b94c5e3 STEP: Creating a pod to test consume configMaps May 8 22:03:32.084: INFO: Waiting up to 5m0s for pod "pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a" in namespace "configmap-1114" to be "success or failure" May 8 22:03:32.106: INFO: Pod "pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.546907ms May 8 22:03:34.110: INFO: Pod "pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026433947s May 8 22:03:36.115: INFO: Pod "pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030854151s STEP: Saw pod success May 8 22:03:36.115: INFO: Pod "pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a" satisfied condition "success or failure" May 8 22:03:36.118: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a container configmap-volume-test: STEP: delete the pod May 8 22:03:36.137: INFO: Waiting for pod pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a to disappear May 8 22:03:36.142: INFO: Pod pod-configmaps-51f996de-4112-4995-85c5-b0537b35102a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:36.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1114" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2935,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:36.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0508 22:03:46.236044 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 22:03:46.236: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:46.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-122" for this suite. 
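------------------------------
The garbage-collector spec above deletes the replication controller and then waits for its pods to vanish. The behavior under test is delete propagation; the spec's title, "when not orphaning", suggests a non-orphan policy such as background propagation, sketched here with an illustrative RC name (modern client-go signatures, v0.18+):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation deletes the RC immediately and lets the
	// garbage collector reap its pods via their ownerReferences; an
	// Orphan policy would instead strip the owner reference and leave
	// the pods running.
	policy := metav1.DeletePropagationBackground
	if err := client.CoreV1().ReplicationControllers("gc-122").Delete(
		context.Background(), "demo-rc", // illustrative RC name
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		log.Fatal(err)
	}
}
------------------------------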
• [SLOW TEST:10.093 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":178,"skipped":2937,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:46.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-2564306f-1b19-4177-a7c7-4c7b8c87f755 STEP: Creating a pod to test consume secrets May 8 22:03:46.348: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e" in namespace "projected-4468" to be "success or failure" May 8 22:03:46.355: INFO: Pod "pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.236145ms May 8 22:03:48.359: INFO: Pod "pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011669745s May 8 22:03:50.364: INFO: Pod "pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016388705s STEP: Saw pod success May 8 22:03:50.364: INFO: Pod "pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e" satisfied condition "success or failure" May 8 22:03:50.368: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e container secret-volume-test: STEP: delete the pod May 8 22:03:50.404: INFO: Waiting for pod pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e to disappear May 8 22:03:50.417: INFO: Pod pod-projected-secrets-80302627-c6fd-43a1-876d-ef1f5c8f659e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:50.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4468" for this suite. 
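------------------------------
The projected-secret spec above consumes one secret through more than one volume in the same pod. A sketch of the two volume definitions (the secret and volume names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// secretProjection builds a projected volume source wrapping one secret.
func secretProjection(name string) corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			}},
		},
	}
}

func main() {
	// One secret, projected into two independent volumes; mounted at two
	// paths, each mount exposes the same keys.
	vols := []corev1.Volume{
		{Name: "secret-volume-1", VolumeSource: secretProjection("projected-secret-demo")},
		{Name: "secret-volume-2", VolumeSource: secretProjection("projected-secret-demo")},
	}
	out, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(out))
}

Secret volumes are kept in sync by the kubelet, so a later update to the secret is eventually reflected at both mount paths.
------------------------------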
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2940,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:50.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 8 22:03:50.525: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:03:56.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4243" for this suite. • [SLOW TEST:5.785 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":180,"skipped":2942,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:03:56.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:03:56.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40" in namespace "downward-api-8820" to be "success or failure" May 8 22:03:56.299: INFO: Pod "downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.105513ms May 8 22:03:58.303: INFO: Pod "downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00794612s May 8 22:04:00.313: INFO: Pod "downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017834173s STEP: Saw pod success May 8 22:04:00.313: INFO: Pod "downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40" satisfied condition "success or failure" May 8 22:04:00.315: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40 container client-container: STEP: delete the pod May 8 22:04:00.333: INFO: Waiting for pod downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40 to disappear May 8 22:04:00.337: INFO: Pod downwardapi-volume-a1514fc9-99af-4b4a-a473-b44a2bbe1b40 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:04:00.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8820" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:04:00.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:04:00.524: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 8 22:04:05.554: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 8 22:04:05.554: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 8 22:04:07.572: INFO: Creating deployment "test-rollover-deployment" May 8 22:04:07.649: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 8 22:04:09.656: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 8 22:04:09.664: INFO: Ensure that both replica sets have 1 created replica May 8 22:04:09.670: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 8 22:04:09.675: INFO: Updating deployment test-rollover-deployment May 8 22:04:09.675: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 8 22:04:11.697: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 8 22:04:11.703: INFO: Make sure deployment "test-rollover-deployment" is complete May 8 22:04:11.709: INFO: all replica sets need to contain the pod-template-hash label May 8 22:04:11.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572249, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:04:13.715: INFO: all replica sets need to contain the pod-template-hash label May 8 22:04:13.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572253, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:04:15.714: INFO: all replica sets need to contain the pod-template-hash label May 8 22:04:15.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572253, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:04:17.715: INFO: all replica sets need to contain the pod-template-hash label May 8 22:04:17.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572253, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:04:19.717: INFO: all replica sets need to contain the pod-template-hash label May 8 22:04:19.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572253, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:04:21.715: INFO: all replica sets need to contain the pod-template-hash label May 8 22:04:21.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572253, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:04:23.986: INFO: May 8 22:04:23.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572263, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:04:25.716: INFO: May 8 22:04:25.716: INFO: 
Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 8 22:04:25.723: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9975 /apis/apps/v1/namespaces/deployment-9975/deployments/test-rollover-deployment a8008e0a-3c9a-4dd2-af92-528767400105 14545612 2 2020-05-08 22:04:07 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f75ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-08 22:04:07 +0000 UTC,LastTransitionTime:2020-05-08 22:04:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-08 22:04:23 +0000 UTC,LastTransitionTime:2020-05-08 22:04:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 8 22:04:25.727: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9975 /apis/apps/v1/namespaces/deployment-9975/replicasets/test-rollover-deployment-574d6dfbff 5c0b0fd8-12bb-4ebd-8d5e-8c3bef5147b6 14545599 2 2020-05-08 22:04:09 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a8008e0a-3c9a-4dd2-af92-528767400105 0xc002410007 0xc002410008}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0024100d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 8 22:04:25.727: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 8 22:04:25.727: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9975 /apis/apps/v1/namespaces/deployment-9975/replicasets/test-rollover-controller 6334392a-1190-481d-9a7d-8f5d81e4207f 14545611 2 2020-05-08 22:04:00 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a8008e0a-3c9a-4dd2-af92-528767400105 0xc002f75f1f 0xc002f75f30}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002f75f98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 8 22:04:25.727: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9975 /apis/apps/v1/namespaces/deployment-9975/replicasets/test-rollover-deployment-f6c94f66c d7e79ea3-7853-4864-84e0-8fb0e82c2e00 14545544 2 2020-05-08 22:04:07 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a8008e0a-3c9a-4dd2-af92-528767400105 0xc002410230 0xc002410231}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002410318 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 8 22:04:25.730: INFO: Pod "test-rollover-deployment-574d6dfbff-clfb9" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-clfb9 test-rollover-deployment-574d6dfbff- deployment-9975 /api/v1/namespaces/deployment-9975/pods/test-rollover-deployment-574d6dfbff-clfb9 4d960d5a-265e-4ebc-9e16-549b2b0cb10b 14545564 0 2020-05-08 22:04:09 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 5c0b0fd8-12bb-4ebd-8d5e-8c3bef5147b6 0xc002eae317 0xc002eae318}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6jg9x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6jg9x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6jg9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]Topolog
ySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:04:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:04:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:04:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.108,StartTime:2020-05-08 22:04:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:04:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1038febc4784aa028e4eac35e7b69124efbea7779defaf60392daae7edfcf736,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:04:25.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9975" for this suite. 
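------------------------------
The rollover assertions above (old ReplicaSet drained to zero replicas, new ReplicaSet made available, MinReadySeconds honored) can be reproduced with plain kubectl. A minimal sketch; the deployment name and images are illustrative, not taken from the test run:

kubectl create deployment rollover-demo --image=nginx:1.16
kubectl set image deployment/rollover-demo nginx=nginx:1.17   # new pod-template-hash => new ReplicaSet
kubectl rollout status deployment/rollover-demo               # blocks until the new ReplicaSet is available
kubectl get rs -l app=rollover-demo                           # old ReplicaSet shows DESIRED 0, new one serves

------------------------------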
• [SLOW TEST:25.396 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":182,"skipped":2980,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:04:25.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:04:25.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a" in namespace "projected-9263" to be "success or failure" May 8 22:04:25.824: INFO: Pod "downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.986135ms May 8 22:04:27.827: INFO: Pod "downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013107521s May 8 22:04:29.842: INFO: Pod "downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028640266s STEP: Saw pod success May 8 22:04:29.842: INFO: Pod "downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a" satisfied condition "success or failure" May 8 22:04:29.845: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a container client-container: STEP: delete the pod May 8 22:04:29.886: INFO: Waiting for pod downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a to disappear May 8 22:04:29.896: INFO: Pod downwardapi-volume-ae15c56a-0f9d-4c36-9c8a-19fd9b8fe32a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:04:29.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9263" for this suite. 
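------------------------------
What the projected downwardAPI test above exercises: a container's own CPU request surfaced as a file through a projected volume with a resourceFieldRef. A minimal sketch of an equivalent pod, with illustrative names; the divisor of 1m makes a 250m request print as "250":

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs downward-cpu-demo   # expected output once the pod completes: 250

------------------------------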
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2983,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:04:29.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8437 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8437 I0508 22:04:30.096235 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8437, replica count: 2 I0508 22:04:33.146737 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 22:04:36.146956 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 22:04:36.147: INFO: Creating new exec pod May 8 22:04:41.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8437 execpodbjlvj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 8 22:04:44.111: INFO: stderr: "I0508 22:04:44.012902 3012 log.go:172] (0xc0004af290) (0xc0009801e0) Create stream\nI0508 22:04:44.012942 3012 log.go:172] (0xc0004af290) (0xc0009801e0) Stream added, broadcasting: 1\nI0508 22:04:44.016292 3012 log.go:172] (0xc0004af290) Reply frame received for 1\nI0508 22:04:44.016356 3012 log.go:172] (0xc0004af290) (0xc000980280) Create stream\nI0508 22:04:44.016394 3012 log.go:172] (0xc0004af290) (0xc000980280) Stream added, broadcasting: 3\nI0508 22:04:44.017731 3012 log.go:172] (0xc0004af290) Reply frame received for 3\nI0508 22:04:44.017781 3012 log.go:172] (0xc0004af290) (0xc000635d60) Create stream\nI0508 22:04:44.017794 3012 log.go:172] (0xc0004af290) (0xc000635d60) Stream added, broadcasting: 5\nI0508 22:04:44.018790 3012 log.go:172] (0xc0004af290) Reply frame received for 5\nI0508 22:04:44.102584 3012 log.go:172] (0xc0004af290) Data frame received for 5\nI0508 22:04:44.102620 3012 log.go:172] (0xc000635d60) (5) Data frame handling\nI0508 22:04:44.102642 3012 log.go:172] (0xc000635d60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0508 22:04:44.102806 3012 log.go:172] (0xc0004af290) Data frame received for 5\nI0508 22:04:44.102842 3012 log.go:172] (0xc000635d60) (5) Data frame handling\nI0508 22:04:44.102867 3012 log.go:172] (0xc000635d60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0508 22:04:44.103402 3012 log.go:172] (0xc0004af290) Data frame 
received for 5\nI0508 22:04:44.103431 3012 log.go:172] (0xc000635d60) (5) Data frame handling\nI0508 22:04:44.103485 3012 log.go:172] (0xc0004af290) Data frame received for 3\nI0508 22:04:44.103509 3012 log.go:172] (0xc000980280) (3) Data frame handling\nI0508 22:04:44.105972 3012 log.go:172] (0xc0004af290) Data frame received for 1\nI0508 22:04:44.106016 3012 log.go:172] (0xc0009801e0) (1) Data frame handling\nI0508 22:04:44.106041 3012 log.go:172] (0xc0009801e0) (1) Data frame sent\nI0508 22:04:44.106282 3012 log.go:172] (0xc0004af290) (0xc0009801e0) Stream removed, broadcasting: 1\nI0508 22:04:44.106532 3012 log.go:172] (0xc0004af290) Go away received\nI0508 22:04:44.106865 3012 log.go:172] (0xc0004af290) (0xc0009801e0) Stream removed, broadcasting: 1\nI0508 22:04:44.106903 3012 log.go:172] (0xc0004af290) (0xc000980280) Stream removed, broadcasting: 3\nI0508 22:04:44.106937 3012 log.go:172] (0xc0004af290) (0xc000635d60) Stream removed, broadcasting: 5\n" May 8 22:04:44.111: INFO: stdout: "" May 8 22:04:44.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8437 execpodbjlvj -- /bin/sh -x -c nc -zv -t -w 2 10.101.201.206 80' May 8 22:04:44.320: INFO: stderr: "I0508 22:04:44.245293 3046 log.go:172] (0xc0000f4370) (0xc00029d5e0) Create stream\nI0508 22:04:44.245352 3046 log.go:172] (0xc0000f4370) (0xc00029d5e0) Stream added, broadcasting: 1\nI0508 22:04:44.247756 3046 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0508 22:04:44.247807 3046 log.go:172] (0xc0000f4370) (0xc00072fb80) Create stream\nI0508 22:04:44.247829 3046 log.go:172] (0xc0000f4370) (0xc00072fb80) Stream added, broadcasting: 3\nI0508 22:04:44.248773 3046 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0508 22:04:44.248810 3046 log.go:172] (0xc0000f4370) (0xc000ae6000) Create stream\nI0508 22:04:44.248821 3046 log.go:172] (0xc0000f4370) (0xc000ae6000) Stream added, broadcasting: 5\nI0508 22:04:44.249981 3046 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0508 22:04:44.312989 3046 log.go:172] (0xc0000f4370) Data frame received for 3\nI0508 22:04:44.313038 3046 log.go:172] (0xc00072fb80) (3) Data frame handling\nI0508 22:04:44.313064 3046 log.go:172] (0xc0000f4370) Data frame received for 5\nI0508 22:04:44.313073 3046 log.go:172] (0xc000ae6000) (5) Data frame handling\nI0508 22:04:44.313082 3046 log.go:172] (0xc000ae6000) (5) Data frame sent\nI0508 22:04:44.313096 3046 log.go:172] (0xc0000f4370) Data frame received for 5\nI0508 22:04:44.313252 3046 log.go:172] (0xc000ae6000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.201.206 80\nConnection to 10.101.201.206 80 port [tcp/http] succeeded!\nI0508 22:04:44.315065 3046 log.go:172] (0xc0000f4370) Data frame received for 1\nI0508 22:04:44.315092 3046 log.go:172] (0xc00029d5e0) (1) Data frame handling\nI0508 22:04:44.315106 3046 log.go:172] (0xc00029d5e0) (1) Data frame sent\nI0508 22:04:44.315119 3046 log.go:172] (0xc0000f4370) (0xc00029d5e0) Stream removed, broadcasting: 1\nI0508 22:04:44.315291 3046 log.go:172] (0xc0000f4370) Go away received\nI0508 22:04:44.315385 3046 log.go:172] (0xc0000f4370) (0xc00029d5e0) Stream removed, broadcasting: 1\nI0508 22:04:44.315406 3046 log.go:172] (0xc0000f4370) (0xc00072fb80) Stream removed, broadcasting: 3\nI0508 22:04:44.315417 3046 log.go:172] (0xc0000f4370) (0xc000ae6000) Stream removed, broadcasting: 5\n" May 8 22:04:44.320: INFO: stdout: "" May 8 22:04:44.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8437 
execpodbjlvj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31074' May 8 22:04:44.512: INFO: stderr: "I0508 22:04:44.433912 3068 log.go:172] (0xc00094f1e0) (0xc0009223c0) Create stream\nI0508 22:04:44.433977 3068 log.go:172] (0xc00094f1e0) (0xc0009223c0) Stream added, broadcasting: 1\nI0508 22:04:44.438264 3068 log.go:172] (0xc00094f1e0) Reply frame received for 1\nI0508 22:04:44.438306 3068 log.go:172] (0xc00094f1e0) (0xc0008e8000) Create stream\nI0508 22:04:44.438319 3068 log.go:172] (0xc00094f1e0) (0xc0008e8000) Stream added, broadcasting: 3\nI0508 22:04:44.439158 3068 log.go:172] (0xc00094f1e0) Reply frame received for 3\nI0508 22:04:44.439194 3068 log.go:172] (0xc00094f1e0) (0xc000922000) Create stream\nI0508 22:04:44.439203 3068 log.go:172] (0xc00094f1e0) (0xc000922000) Stream added, broadcasting: 5\nI0508 22:04:44.440161 3068 log.go:172] (0xc00094f1e0) Reply frame received for 5\nI0508 22:04:44.505397 3068 log.go:172] (0xc00094f1e0) Data frame received for 5\nI0508 22:04:44.505431 3068 log.go:172] (0xc000922000) (5) Data frame handling\nI0508 22:04:44.505451 3068 log.go:172] (0xc000922000) (5) Data frame sent\nI0508 22:04:44.505462 3068 log.go:172] (0xc00094f1e0) Data frame received for 5\nI0508 22:04:44.505471 3068 log.go:172] (0xc000922000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31074\nConnection to 172.17.0.10 31074 port [tcp/31074] succeeded!\nI0508 22:04:44.505500 3068 log.go:172] (0xc000922000) (5) Data frame sent\nI0508 22:04:44.506173 3068 log.go:172] (0xc00094f1e0) Data frame received for 3\nI0508 22:04:44.506212 3068 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0508 22:04:44.506294 3068 log.go:172] (0xc00094f1e0) Data frame received for 5\nI0508 22:04:44.506312 3068 log.go:172] (0xc000922000) (5) Data frame handling\nI0508 22:04:44.507891 3068 log.go:172] (0xc00094f1e0) Data frame received for 1\nI0508 22:04:44.507931 3068 log.go:172] (0xc0009223c0) (1) Data frame handling\nI0508 22:04:44.507960 3068 log.go:172] (0xc0009223c0) (1) Data frame sent\nI0508 22:04:44.507984 3068 log.go:172] (0xc00094f1e0) (0xc0009223c0) Stream removed, broadcasting: 1\nI0508 22:04:44.508001 3068 log.go:172] (0xc00094f1e0) Go away received\nI0508 22:04:44.508381 3068 log.go:172] (0xc00094f1e0) (0xc0009223c0) Stream removed, broadcasting: 1\nI0508 22:04:44.508400 3068 log.go:172] (0xc00094f1e0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0508 22:04:44.508409 3068 log.go:172] (0xc00094f1e0) (0xc000922000) Stream removed, broadcasting: 5\n" May 8 22:04:44.513: INFO: stdout: "" May 8 22:04:44.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8437 execpodbjlvj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31074' May 8 22:04:44.732: INFO: stderr: "I0508 22:04:44.654958 3088 log.go:172] (0xc00092a000) (0xc000908000) Create stream\nI0508 22:04:44.655028 3088 log.go:172] (0xc00092a000) (0xc000908000) Stream added, broadcasting: 1\nI0508 22:04:44.659171 3088 log.go:172] (0xc00092a000) Reply frame received for 1\nI0508 22:04:44.659207 3088 log.go:172] (0xc00092a000) (0xc00041e320) Create stream\nI0508 22:04:44.659216 3088 log.go:172] (0xc00092a000) (0xc00041e320) Stream added, broadcasting: 3\nI0508 22:04:44.660227 3088 log.go:172] (0xc00092a000) Reply frame received for 3\nI0508 22:04:44.660265 3088 log.go:172] (0xc00092a000) (0xc0009080a0) Create stream\nI0508 22:04:44.660285 3088 log.go:172] (0xc00092a000) (0xc0009080a0) Stream added, broadcasting: 5\nI0508 22:04:44.661511 3088 log.go:172] (0xc00092a000) Reply frame received for 
5\nI0508 22:04:44.723962 3088 log.go:172] (0xc00092a000) Data frame received for 5\nI0508 22:04:44.724013 3088 log.go:172] (0xc0009080a0) (5) Data frame handling\nI0508 22:04:44.724042 3088 log.go:172] (0xc0009080a0) (5) Data frame sent\nI0508 22:04:44.724054 3088 log.go:172] (0xc00092a000) Data frame received for 5\nI0508 22:04:44.724063 3088 log.go:172] (0xc0009080a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31074\nConnection to 172.17.0.8 31074 port [tcp/31074] succeeded!\nI0508 22:04:44.724107 3088 log.go:172] (0xc0009080a0) (5) Data frame sent\nI0508 22:04:44.724366 3088 log.go:172] (0xc00092a000) Data frame received for 3\nI0508 22:04:44.724403 3088 log.go:172] (0xc00041e320) (3) Data frame handling\nI0508 22:04:44.724635 3088 log.go:172] (0xc00092a000) Data frame received for 5\nI0508 22:04:44.724671 3088 log.go:172] (0xc0009080a0) (5) Data frame handling\nI0508 22:04:44.727215 3088 log.go:172] (0xc00092a000) Data frame received for 1\nI0508 22:04:44.727251 3088 log.go:172] (0xc000908000) (1) Data frame handling\nI0508 22:04:44.727274 3088 log.go:172] (0xc000908000) (1) Data frame sent\nI0508 22:04:44.727291 3088 log.go:172] (0xc00092a000) (0xc000908000) Stream removed, broadcasting: 1\nI0508 22:04:44.727309 3088 log.go:172] (0xc00092a000) Go away received\nI0508 22:04:44.727725 3088 log.go:172] (0xc00092a000) (0xc000908000) Stream removed, broadcasting: 1\nI0508 22:04:44.727749 3088 log.go:172] (0xc00092a000) (0xc00041e320) Stream removed, broadcasting: 3\nI0508 22:04:44.727761 3088 log.go:172] (0xc00092a000) (0xc0009080a0) Stream removed, broadcasting: 5\n" May 8 22:04:44.732: INFO: stdout: "" May 8 22:04:44.732: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:04:44.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8437" for this suite. 
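------------------------------
The type change above is only a spec mutation, so kubectl can do the same. A minimal sketch with illustrative names: switching a Service from ExternalName to NodePort means clearing externalName and supplying ports (and normally a selector), after which reachability can be probed with the same nc invocation the test uses; <node-ip> is a placeholder for any node's address:

kubectl create service externalname my-svc --external-name=example.com
kubectl patch service my-svc -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"targetPort":80}],"selector":{"app":"my-app"}}}'
NODE_PORT=$(kubectl get service my-svc -o jsonpath='{.spec.ports[0].nodePort}')
nc -zv -t -w 2 <node-ip> "$NODE_PORT"   # should report: Connection to <node-ip> ... succeeded!

------------------------------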
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.874 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":184,"skipped":2985,"failed":0} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:04:44.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:04:49.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5971" for this suite. 
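------------------------------
Adoption, as verified above, only needs a bare pod whose labels match the controller's selector; the ReplicationController then sets itself as the pod's ownerReference instead of creating a replacement. A minimal sketch with illustrative names:

kubectl run pod-adoption --image=k8s.gcr.io/pause:3.1 --restart=Never --labels=name=pod-adoption
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicationController

------------------------------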
• [SLOW TEST:5.148 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":185,"skipped":2985,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:04:49.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-7813/configmap-test-532f9056-2bd0-44d9-a739-1375addfb214 STEP: Creating a pod to test consume configMaps May 8 22:04:50.108: INFO: Waiting up to 5m0s for pod "pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e" in namespace "configmap-7813" to be "success or failure" May 8 22:04:50.141: INFO: Pod "pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.932078ms May 8 22:04:52.146: INFO: Pod "pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037057411s May 8 22:04:54.148: INFO: Pod "pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e": Phase="Running", Reason="", readiness=true. Elapsed: 4.039737877s May 8 22:04:56.152: INFO: Pod "pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043685837s STEP: Saw pod success May 8 22:04:56.152: INFO: Pod "pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e" satisfied condition "success or failure" May 8 22:04:56.155: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e container env-test: STEP: delete the pod May 8 22:04:56.175: INFO: Waiting for pod pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e to disappear May 8 22:04:56.195: INFO: Pod pod-configmaps-18d542bf-4bc4-4431-b605-64db7f90dd5e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:04:56.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7813" for this suite. 
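------------------------------
The ConfigMap-to-environment wiring tested above goes through configMapKeyRef under env. A minimal sketch with illustrative names:

kubectl create configmap env-demo --from-literal=DATA_1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: DATA_1
EOF
kubectl logs env-test-pod   # expected once the pod completes: DATA_1=value-1

------------------------------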
• [SLOW TEST:6.272 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3007,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:04:56.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-10c9b99f-38d5-4fab-9420-0f0f53fa6a91 STEP: Creating a pod to test consume secrets May 8 22:04:56.330: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334" in namespace "projected-5477" to be "success or failure" May 8 22:04:56.368: INFO: Pod "pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334": Phase="Pending", Reason="", readiness=false. Elapsed: 37.057329ms May 8 22:04:58.372: INFO: Pod "pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041649083s May 8 22:05:00.400: INFO: Pod "pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069731908s STEP: Saw pod success May 8 22:05:00.400: INFO: Pod "pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334" satisfied condition "success or failure" May 8 22:05:00.403: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334 container projected-secret-volume-test: STEP: delete the pod May 8 22:05:00.442: INFO: Waiting for pod pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334 to disappear May 8 22:05:00.469: INFO: Pod pod-projected-secrets-7b0ef6f4-1ebd-45e0-aefe-fe3027f7d334 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:05:00.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5477" for this suite. 
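------------------------------
"With mappings" in the test above means the secret key is remapped to a different file path via items in the projected source. A minimal sketch with illustrative names:

kubectl create secret generic projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-demo
          items:
          - key: data-1
            path: new-path-data-1   # the mapping: key data-1 exposed under a new filename
EOF

------------------------------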
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3010,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:05:00.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 8 22:05:00.533: INFO: Waiting up to 5m0s for pod "pod-b1d43293-39ba-467c-9da2-90140e7e8404" in namespace "emptydir-2066" to be "success or failure" May 8 22:05:00.538: INFO: Pod "pod-b1d43293-39ba-467c-9da2-90140e7e8404": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249322ms May 8 22:05:02.609: INFO: Pod "pod-b1d43293-39ba-467c-9da2-90140e7e8404": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075155684s May 8 22:05:04.613: INFO: Pod "pod-b1d43293-39ba-467c-9da2-90140e7e8404": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079609349s STEP: Saw pod success May 8 22:05:04.613: INFO: Pod "pod-b1d43293-39ba-467c-9da2-90140e7e8404" satisfied condition "success or failure" May 8 22:05:04.616: INFO: Trying to get logs from node jerma-worker2 pod pod-b1d43293-39ba-467c-9da2-90140e7e8404 container test-container: STEP: delete the pod May 8 22:05:04.635: INFO: Waiting for pod pod-b1d43293-39ba-467c-9da2-90140e7e8404 to disappear May 8 22:05:04.695: INFO: Pod pod-b1d43293-39ba-467c-9da2-90140e7e8404 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:05:04.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2066" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3024,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:05:04.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 8 22:05:05.417: INFO: Pod name wrapped-volume-race-d96ad1ab-3c0e-42e7-8bab-b23ae9b05ed0: Found 0 pods out of 5 May 8 22:05:10.833: INFO: Pod name wrapped-volume-race-d96ad1ab-3c0e-42e7-8bab-b23ae9b05ed0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d96ad1ab-3c0e-42e7-8bab-b23ae9b05ed0 in namespace emptydir-wrapper-1190, will wait for the garbage collector to delete the pods May 8 22:05:23.278: INFO: Deleting ReplicationController wrapped-volume-race-d96ad1ab-3c0e-42e7-8bab-b23ae9b05ed0 took: 15.019339ms May 8 22:05:23.678: INFO: Terminating ReplicationController wrapped-volume-race-d96ad1ab-3c0e-42e7-8bab-b23ae9b05ed0 pods took: 400.300665ms STEP: Creating RC which spawns configmap-volume pods May 8 22:05:31.114: INFO: Pod name wrapped-volume-race-78f1a958-2177-4eae-b2bb-e1d2be70cff9: Found 0 pods out of 5 May 8 22:05:36.122: INFO: Pod name wrapped-volume-race-78f1a958-2177-4eae-b2bb-e1d2be70cff9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-78f1a958-2177-4eae-b2bb-e1d2be70cff9 in namespace emptydir-wrapper-1190, will wait for the garbage collector to delete the pods May 8 22:05:50.212: INFO: Deleting ReplicationController wrapped-volume-race-78f1a958-2177-4eae-b2bb-e1d2be70cff9 took: 7.198752ms May 8 22:05:50.612: INFO: Terminating ReplicationController wrapped-volume-race-78f1a958-2177-4eae-b2bb-e1d2be70cff9 pods took: 400.252767ms STEP: Creating RC which spawns configmap-volume pods May 8 22:06:00.356: INFO: Pod name wrapped-volume-race-5f41588d-292a-484e-bac1-3365fb8e0178: Found 0 pods out of 5 May 8 22:06:05.364: INFO: Pod name wrapped-volume-race-5f41588d-292a-484e-bac1-3365fb8e0178: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5f41588d-292a-484e-bac1-3365fb8e0178 in namespace emptydir-wrapper-1190, will wait for the garbage collector to delete the pods May 8 22:06:18.279: INFO: Deleting ReplicationController wrapped-volume-race-5f41588d-292a-484e-bac1-3365fb8e0178 took: 6.195428ms May 8 22:06:18.580: INFO: Terminating ReplicationController wrapped-volume-race-5f41588d-292a-484e-bac1-3365fb8e0178 pods took: 300.229769ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 
22:06:30.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1190" for this suite. • [SLOW TEST:85.446 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":189,"skipped":3025,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:06:30.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 22:06:30.899: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 22:06:32.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572390, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572390, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572391, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572390, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:06:35.964: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:06:35.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1214-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:06:37.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5001" for this suite. STEP: Destroying namespace "webhook-5001-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.355 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":190,"skipped":3029,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:06:37.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 8 22:06:37.614: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 22:06:37.643: INFO: Waiting for terminating namespaces to be deleted... 
May 8 22:06:37.647: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 8 22:06:37.717: INFO: sample-webhook-deployment-5f65f8c764-smzv9 from webhook-5001 started at 2020-05-08 22:06:30 +0000 UTC (1 container status recorded) May 8 22:06:37.717: INFO: Container sample-webhook ready: true, restart count 0 May 8 22:06:37.717: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:06:37.717: INFO: Container kindnet-cni ready: true, restart count 0 May 8 22:06:37.717: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:06:37.717: INFO: Container kube-proxy ready: true, restart count 0 May 8 22:06:37.717: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 8 22:06:37.731: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:06:37.731: INFO: Container kube-proxy ready: true, restart count 0 May 8 22:06:37.731: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 8 22:06:37.731: INFO: Container kube-hunter ready: false, restart count 0 May 8 22:06:37.731: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:06:37.731: INFO: Container kindnet-cni ready: true, restart count 0 May 8 22:06:37.731: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 8 22:06:37.731: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160d2ce32ed00347], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:06:38.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8187" for this suite.
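------------------------------
The FailedScheduling event above can be provoked directly: give a pod a nodeSelector that no node satisfies and it stays Pending. A minimal sketch; the pod name and the label key/value are illustrative:

kubectl run restricted-pod --image=k8s.gcr.io/pause:3.1 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"label":"nonempty"}}}'
kubectl describe pod restricted-pod
# Events should include something like:
#   Warning  FailedScheduling  ...  0/3 nodes are available: 3 node(s) didn't match node selector.

------------------------------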
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":191,"skipped":3033,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:06:38.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:06:43.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1354" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3052,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:06:43.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 8 22:06:43.432: INFO: PodSpec: initContainers in spec.initContainers May 8 22:07:33.855: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fa010c68-823e-4e70-9f15-30075a81c2b3", GenerateName:"", Namespace:"init-container-9179", SelfLink:"/api/v1/namespaces/init-container-9179/pods/pod-init-fa010c68-823e-4e70-9f15-30075a81c2b3", UID:"696ed341-9e63-42d8-9a2d-608714d203c0", ResourceVersion:"14547374", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724572403, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"432807032"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6t775", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0034cabc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6t775", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6t775", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6t775", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f75d98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020b08a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f75e20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f75e40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f75e48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f75e4c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572403, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572403, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572403, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572403, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.119", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.119"}}, StartTime:(*v1.Time)(0xc004dee7c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc004dee8e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00175ec40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f06d53d0ace7a8d878258f057067e8b50d2210a29ef3300951ce9ea8ddffd745", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004dee900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004dee8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002f75ecf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:07:33.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9179" for this suite. 
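------------------------------
The pod dumped above is small despite the verbose output: two init containers (/bin/false, then /bin/true) ahead of a pause container, with RestartPolicy Always. Because init1 keeps failing, init2 and run1 never start and the init restart count climbs, which is exactly what the test asserts. A minimal sketch of the same shape, with an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]   # always fails, so nothing after it ever runs
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod pod-init-demo   # STATUS Init:Error / Init:CrashLoopBackOff, RESTARTS increasing

------------------------------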
• [SLOW TEST:50.649 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":193,"skipped":3097,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:07:33.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 8 22:07:34.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 8 22:07:34.313: INFO: stderr: "" May 8 22:07:34.313: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:07:34.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7327" for this suite. 
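------------------------------
The assertion behind the api-versions test above reduces to a one-liner; grep -x matches the bare core group exactly:

kubectl api-versions | grep -x v1   # exit status 0 iff the core v1 API is served

------------------------------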
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":194,"skipped":3100,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:07:34.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 8 22:07:40.959: INFO: Successfully updated pod "adopt-release-kb5m7" STEP: Checking that the Job readopts the Pod May 8 22:07:40.959: INFO: Waiting up to 15m0s for pod "adopt-release-kb5m7" in namespace "job-3781" to be "adopted" May 8 22:07:40.967: INFO: Pod "adopt-release-kb5m7": Phase="Running", Reason="", readiness=true. Elapsed: 8.171728ms May 8 22:07:42.982: INFO: Pod "adopt-release-kb5m7": Phase="Running", Reason="", readiness=true. Elapsed: 2.023182231s May 8 22:07:42.982: INFO: Pod "adopt-release-kb5m7" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 8 22:07:43.492: INFO: Successfully updated pod "adopt-release-kb5m7" STEP: Checking that the Job releases the Pod May 8 22:07:43.492: INFO: Waiting up to 15m0s for pod "adopt-release-kb5m7" in namespace "job-3781" to be "released" May 8 22:07:43.500: INFO: Pod "adopt-release-kb5m7": Phase="Running", Reason="", readiness=true. Elapsed: 8.295509ms May 8 22:07:45.683: INFO: Pod "adopt-release-kb5m7": Phase="Running", Reason="", readiness=true. Elapsed: 2.191448846s May 8 22:07:45.683: INFO: Pod "adopt-release-kb5m7" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:07:45.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3781" for this suite. 
• [SLOW TEST:11.367 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":195,"skipped":3102,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:07:45.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:07:52.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6772" for this suite. STEP: Destroying namespace "nsdeletetest-8681" for this suite. May 8 22:07:52.458: INFO: Namespace nsdeletetest-8681 was already deleted STEP: Destroying namespace "nsdeletetest-4482" for this suite. 
• [SLOW TEST:6.771 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":196,"skipped":3116,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:07:52.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-e02752e3-60d6-4f0a-a90d-fb8a59399d48 STEP: Creating a pod to test consume secrets May 8 22:07:52.534: INFO: Waiting up to 5m0s for pod "pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec" in namespace "secrets-9154" to be "success or failure" May 8 22:07:52.581: INFO: Pod "pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec": Phase="Pending", Reason="", readiness=false. Elapsed: 47.712028ms May 8 22:07:54.585: INFO: Pod "pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051706478s May 8 22:07:56.590: INFO: Pod "pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056122144s STEP: Saw pod success May 8 22:07:56.590: INFO: Pod "pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec" satisfied condition "success or failure" May 8 22:07:56.593: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec container secret-volume-test: STEP: delete the pod May 8 22:07:56.634: INFO: Waiting for pod pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec to disappear May 8 22:07:56.645: INFO: Pod pod-secrets-2f972353-3016-403c-b7a3-bd7576cf30ec no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:07:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9154" for this suite. 
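The "mappings and Item Mode" in the test name correspond to the Items and Mode fields of the secret volume source: a secret key is remapped to a custom file path with an explicit file mode. A sketch of that Pod shape, with illustrative names, image, and mode:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod builds a Pod along the lines of the one created above: the key
// "data-1" is remapped to a new path and given mode 0400.
func secretPod(secretName string) *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode, // the "Item Mode" the test name refers to
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
}

func main() { _ = secretPod("secret-test-map-example") }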
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3137,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:07:56.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 8 22:07:56.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9859' May 8 22:07:57.026: INFO: stderr: "" May 8 22:07:57.026: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 22:07:57.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9859' May 8 22:07:57.171: INFO: stderr: "" May 8 22:07:57.171: INFO: stdout: "update-demo-nautilus-czm6d update-demo-nautilus-zdlgr " May 8 22:07:57.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czm6d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9859' May 8 22:07:57.263: INFO: stderr: "" May 8 22:07:57.263: INFO: stdout: "" May 8 22:07:57.263: INFO: update-demo-nautilus-czm6d is created but not running May 8 22:08:02.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9859' May 8 22:08:02.381: INFO: stderr: "" May 8 22:08:02.381: INFO: stdout: "update-demo-nautilus-czm6d update-demo-nautilus-zdlgr " May 8 22:08:02.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czm6d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9859' May 8 22:08:02.480: INFO: stderr: "" May 8 22:08:02.480: INFO: stdout: "true" May 8 22:08:02.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czm6d -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9859' May 8 22:08:02.576: INFO: stderr: "" May 8 22:08:02.576: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 22:08:02.576: INFO: validating pod update-demo-nautilus-czm6d May 8 22:08:02.580: INFO: got data: { "image": "nautilus.jpg" } May 8 22:08:02.580: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 22:08:02.580: INFO: update-demo-nautilus-czm6d is verified up and running May 8 22:08:02.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdlgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9859' May 8 22:08:02.675: INFO: stderr: "" May 8 22:08:02.675: INFO: stdout: "true" May 8 22:08:02.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdlgr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9859' May 8 22:08:02.763: INFO: stderr: "" May 8 22:08:02.763: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 22:08:02.763: INFO: validating pod update-demo-nautilus-zdlgr May 8 22:08:02.767: INFO: got data: { "image": "nautilus.jpg" } May 8 22:08:02.767: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 22:08:02.767: INFO: update-demo-nautilus-zdlgr is verified up and running STEP: using delete to clean up resources May 8 22:08:02.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9859' May 8 22:08:02.902: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 22:08:02.903: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 8 22:08:02.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9859' May 8 22:08:03.031: INFO: stderr: "No resources found in kubectl-9859 namespace.\n" May 8 22:08:03.031: INFO: stdout: "" May 8 22:08:03.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9859 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 22:08:03.176: INFO: stderr: "" May 8 22:08:03.176: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:08:03.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9859" for this suite. 
• [SLOW TEST:6.506 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":198,"skipped":3138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:08:03.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 8 22:08:04.845: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 8 22:08:06.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572484, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572484, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572484, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572484, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:08:09.935: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:08:09.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:08:11.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8391" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:8.134 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":199,"skipped":3184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:08:11.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 8 22:08:18.672: INFO: 0 pods remaining May 8 22:08:18.672: INFO: 0 pods has nil DeletionTimestamp May 8 22:08:18.672: INFO: STEP: Gathering metrics W0508 22:08:20.116820 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 8 22:08:20.116: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:08:20.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2898" for this suite. • [SLOW TEST:9.067 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":200,"skipped":3245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:08:20.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 8 22:08:28.194: INFO: Successfully updated pod "annotationupdate553f8d2d-fc06-4878-8821-e75348dee709" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:08:30.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9643" for this suite. 
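The annotation-update test just above relies on a projected downwardAPI volume: metadata.annotations is exposed as a file, and the kubelet rewrites that file after the annotations change, which is what the pod observes. A sketch of that volume shape, with illustrative names and image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod projects metadata.annotations into /etc/podinfo/annotations.
func annotationPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}

func main() { _ = annotationPod() }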
• [SLOW TEST:9.847 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3284,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:08:30.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:08:34.398: INFO: Waiting up to 5m0s for pod "client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2" in namespace "pods-1921" to be "success or failure" May 8 22:08:34.413: INFO: Pod "client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.343387ms May 8 22:08:36.417: INFO: Pod "client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019149206s May 8 22:08:39.109: INFO: Pod "client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2": Phase="Running", Reason="", readiness=true. Elapsed: 4.711102398s May 8 22:08:41.112: INFO: Pod "client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.714332059s STEP: Saw pod success May 8 22:08:41.112: INFO: Pod "client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2" satisfied condition "success or failure" May 8 22:08:41.115: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2 container env3cont: STEP: delete the pod May 8 22:08:41.181: INFO: Waiting for pod client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2 to disappear May 8 22:08:41.211: INFO: Pod client-envvars-565fe99f-5131-4ce4-8109-6397d8b19fd2 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:08:41.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1921" for this suite. 
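What the client pod checks here is the docker-link-style environment the kubelet injects for Services that already exist when the Pod starts. A sketch of what the container would read, with an illustrative service name (uppercased, dashes becoming underscores):

package main

import (
	"fmt"
	"os"
)

func main() {
	// For a Service named "fooservice" created before this Pod started, the
	// kubelet injects variables of this form.
	fmt.Println(os.Getenv("FOOSERVICE_SERVICE_HOST"))
	fmt.Println(os.Getenv("FOOSERVICE_SERVICE_PORT"))
}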
• [SLOW TEST:10.968 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3299,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:08:41.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 8 22:08:41.365: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 8 22:08:52.900: INFO: >>> kubeConfig: /root/.kube/config May 8 22:08:54.802: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:09:05.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4760" for this suite. 
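The "one multiversion CRD" case above boils down to a single CustomResourceDefinition serving two versions of one group, only one of which is the storage version. A sketch of that shape, assuming apiextensions/v1 types; group, kind, and schema are illustrative:

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				// Both versions are served (and so published in OpenAPI), but
				// only one can be the storage version.
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
}

func main() { _ = multiVersionCRD() }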
• [SLOW TEST:24.136 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":203,"skipped":3300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:09:05.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 22:09:05.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-367' May 8 22:09:05.632: INFO: stderr: "" May 8 22:09:05.632: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 8 22:09:10.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-367 -o json' May 8 22:09:11.326: INFO: stderr: "" May 8 22:09:11.326: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-08T22:09:05Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-367\",\n \"resourceVersion\": \"14548186\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-367/pods/e2e-test-httpd-pod\",\n \"uid\": \"198c05bc-e7a8-48c8-bb30-c1d313d7e78a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-25f99\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n 
\"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-25f99\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-25f99\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T22:09:05Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T22:09:09Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T22:09:09Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T22:09:05Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://54c3e73566c2ed2d7ad7482e364883b82d087096a00700f061d4ab9ace6382fe\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-08T22:09:08Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.19\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.19\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-08T22:09:05Z\"\n }\n}\n" STEP: replace the image in the pod May 8 22:09:11.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-367' May 8 22:09:11.630: INFO: stderr: "" May 8 22:09:11.630: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 8 22:09:11.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-367' May 8 22:09:19.497: INFO: stderr: "" May 8 22:09:19.497: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:09:19.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-367" for this suite. 
• [SLOW TEST:14.154 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":204,"skipped":3343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:09:19.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 22:09:20.170: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 22:09:22.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572560, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572560, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572560, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572560, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:09:25.223: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:09:25.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5188-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:09:26.374: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-4139" for this suite. STEP: Destroying namespace "webhook-4139-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.137 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":205,"skipped":3376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:09:26.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-5fn6 STEP: Creating a pod to test atomic-volume-subpath May 8 22:09:26.735: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5fn6" in namespace "subpath-5996" to be "success or failure" May 8 22:09:26.739: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.666026ms May 8 22:09:28.744: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008031517s May 8 22:09:30.751: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 4.01593915s May 8 22:09:32.756: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 6.020661607s May 8 22:09:34.760: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 8.024826875s May 8 22:09:36.765: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 10.029075096s May 8 22:09:38.769: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 12.033528851s May 8 22:09:40.772: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 14.036904186s May 8 22:09:42.777: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 16.041733656s May 8 22:09:44.781: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 18.045650316s May 8 22:09:46.786: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.050136729s May 8 22:09:48.790: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Running", Reason="", readiness=true. Elapsed: 22.054442246s May 8 22:09:50.794: INFO: Pod "pod-subpath-test-downwardapi-5fn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.058185829s STEP: Saw pod success May 8 22:09:50.794: INFO: Pod "pod-subpath-test-downwardapi-5fn6" satisfied condition "success or failure" May 8 22:09:50.796: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-5fn6 container test-container-subpath-downwardapi-5fn6: STEP: delete the pod May 8 22:09:50.831: INFO: Waiting for pod pod-subpath-test-downwardapi-5fn6 to disappear May 8 22:09:50.835: INFO: Pod pod-subpath-test-downwardapi-5fn6 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5fn6 May 8 22:09:50.835: INFO: Deleting pod "pod-subpath-test-downwardapi-5fn6" in namespace "subpath-5996" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:09:50.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5996" for this suite. • [SLOW TEST:24.195 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":206,"skipped":3411,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:09:50.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 8 22:09:50.950: INFO: Waiting up to 5m0s for pod "downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb" in namespace "downward-api-9600" to be "success or failure" May 8 22:09:51.014: INFO: Pod "downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb": Phase="Pending", Reason="", readiness=false. Elapsed: 63.459944ms May 8 22:09:53.128: INFO: Pod "downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17742042s May 8 22:09:55.132: INFO: Pod "downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.181755367s STEP: Saw pod success May 8 22:09:55.132: INFO: Pod "downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb" satisfied condition "success or failure" May 8 22:09:55.136: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb container dapi-container: STEP: delete the pod May 8 22:09:55.160: INFO: Waiting for pod downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb to disappear May 8 22:09:55.165: INFO: Pod downward-api-a45b91ac-90fe-4feb-9a21-3aa2e13e99cb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:09:55.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9600" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:09:55.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 8 22:09:55.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7962' May 8 22:09:55.954: INFO: stderr: "" May 8 22:09:55.954: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 8 22:09:56.958: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:09:56.958: INFO: Found 0 / 1 May 8 22:09:57.998: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:09:57.998: INFO: Found 0 / 1 May 8 22:09:58.959: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:09:58.959: INFO: Found 0 / 1 May 8 22:09:59.959: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:09:59.959: INFO: Found 1 / 1 May 8 22:09:59.959: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 8 22:09:59.962: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:09:59.962: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 8 22:09:59.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-2fffv --namespace=kubectl-7962 -p {"metadata":{"annotations":{"x":"y"}}}' May 8 22:10:00.063: INFO: stderr: "" May 8 22:10:00.063: INFO: stdout: "pod/agnhost-master-2fffv patched\n" STEP: checking annotations May 8 22:10:00.069: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:10:00.069: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:10:00.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7962" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":208,"skipped":3458,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:10:00.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-p29g STEP: Creating a pod to test atomic-volume-subpath May 8 22:10:00.185: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-p29g" in namespace "subpath-7262" to be "success or failure" May 8 22:10:00.189: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391707ms May 8 22:10:02.193: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008377677s May 8 22:10:04.198: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 4.012992077s May 8 22:10:06.202: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 6.017178456s May 8 22:10:08.207: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 8.021648753s May 8 22:10:10.211: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 10.025891254s May 8 22:10:12.215: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 12.029834833s May 8 22:10:14.219: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 14.034400437s May 8 22:10:16.224: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 16.038887599s May 8 22:10:18.228: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 18.042801342s May 8 22:10:20.232: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 20.046966525s May 8 22:10:22.236: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Running", Reason="", readiness=true. Elapsed: 22.051272859s May 8 22:10:24.241: INFO: Pod "pod-subpath-test-secret-p29g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055879538s STEP: Saw pod success May 8 22:10:24.241: INFO: Pod "pod-subpath-test-secret-p29g" satisfied condition "success or failure" May 8 22:10:24.243: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-p29g container test-container-subpath-secret-p29g: STEP: delete the pod May 8 22:10:24.280: INFO: Waiting for pod pod-subpath-test-secret-p29g to disappear May 8 22:10:24.285: INFO: Pod pod-subpath-test-secret-p29g no longer exists STEP: Deleting pod pod-subpath-test-secret-p29g May 8 22:10:24.285: INFO: Deleting pod "pod-subpath-test-secret-p29g" in namespace "subpath-7262" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:10:24.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7262" for this suite. • [SLOW TEST:24.219 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":209,"skipped":3479,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:10:24.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-f0947516-3ebf-4068-af5a-1707b894439c STEP: Creating configMap with name cm-test-opt-upd-9e5f8b3a-a3d9-4267-b3c7-20fbc013ac8f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f0947516-3ebf-4068-af5a-1707b894439c STEP: Updating configmap cm-test-opt-upd-9e5f8b3a-a3d9-4267-b3c7-20fbc013ac8f STEP: Creating configMap with name cm-test-opt-create-3354371e-c456-40fa-be2e-1f3f0bf37f20 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:10:32.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4314" for this suite. 
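The "optional" in this test is the Optional flag on the configmap volume source: the Pod starts even if the referenced ConfigMap does not exist, and the volume contents appear (or disappear) as the ConfigMap is created, updated, or deleted. A sketch of that volume, with an illustrative name:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalConfigMapVolume builds a volume that tolerates a missing ConfigMap.
func optionalConfigMapVolume(name string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional,
			},
		},
	}
}

func main() { _ = optionalConfigMapVolume("cm-test-opt-create-example") }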
• [SLOW TEST:8.192 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:10:32.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:10:32.594: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 8 22:10:35.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9616 create -f -' May 8 22:10:40.127: INFO: stderr: "" May 8 22:10:40.127: INFO: stdout: "e2e-test-crd-publish-openapi-4784-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 8 22:10:40.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9616 delete e2e-test-crd-publish-openapi-4784-crds test-cr' May 8 22:10:40.236: INFO: stderr: "" May 8 22:10:40.236: INFO: stdout: "e2e-test-crd-publish-openapi-4784-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 8 22:10:40.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9616 apply -f -' May 8 22:10:40.676: INFO: stderr: "" May 8 22:10:40.676: INFO: stdout: "e2e-test-crd-publish-openapi-4784-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 8 22:10:40.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9616 delete e2e-test-crd-publish-openapi-4784-crds test-cr' May 8 22:10:40.818: INFO: stderr: "" May 8 22:10:40.818: INFO: stdout: "e2e-test-crd-publish-openapi-4784-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 8 22:10:40.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4784-crds' May 8 22:10:41.103: INFO: stderr: "" May 8 22:10:41.103: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4784-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:10:42.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9616" for this suite. • [SLOW TEST:10.518 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":211,"skipped":3599,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:10:43.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:10:43.159: INFO: Waiting up to 5m0s for pod "busybox-user-65534-bc936f2e-ee76-4fd2-b40a-af58f7a502ac" in namespace "security-context-test-6221" to be "success or failure" May 8 22:10:43.162: INFO: Pod "busybox-user-65534-bc936f2e-ee76-4fd2-b40a-af58f7a502ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.484706ms May 8 22:10:45.178: INFO: Pod "busybox-user-65534-bc936f2e-ee76-4fd2-b40a-af58f7a502ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019349417s May 8 22:10:47.183: INFO: Pod "busybox-user-65534-bc936f2e-ee76-4fd2-b40a-af58f7a502ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023622998s May 8 22:10:47.183: INFO: Pod "busybox-user-65534-bc936f2e-ee76-4fd2-b40a-af58f7a502ac" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:10:47.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6221" for this suite. 
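The runAsUser test above sets a container-level securityContext; uid 65534 is the conventional "nobody" user. A sketch of that Pod, with illustrative name and image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func runAsUserPod() *corev1.Pod {
	uid := int64(65534)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-user-65534",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u"}, // should print 65534
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &uid,
				},
			}},
		},
	}
}

func main() { _ = runAsUserPod() }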
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3607,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:10:47.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 22:10:51.358: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:10:51.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9406" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:10:51.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 22:10:51.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6167' May 8 22:10:51.585: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 22:10:51.585: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 8 22:10:51.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6167' May 8 22:10:51.703: INFO: stderr: "" May 8 22:10:51.703: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:10:51.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6167" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":214,"skipped":3685,"failed":0} ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:10:51.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 8 22:10:51.764: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. May 8 22:10:52.248: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 8 22:10:54.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:10:56.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572652, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:10:59.584: INFO: Waited 627.785985ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:11:00.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7848" for this suite. • [SLOW TEST:8.415 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":215,"skipped":3685,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:11:00.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-2bln STEP: Creating a pod to test atomic-volume-subpath May 8 22:11:00.607: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2bln" in namespace "subpath-4692" to be "success or failure" May 8 22:11:00.770: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Pending", Reason="", readiness=false. Elapsed: 163.4288ms May 8 22:11:02.774: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167689339s May 8 22:11:04.778: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 4.171912193s May 8 22:11:06.783: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 6.176499525s May 8 22:11:08.788: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 8.181069301s May 8 22:11:10.791: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 10.184605457s May 8 22:11:12.795: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 12.188608659s May 8 22:11:14.799: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 14.192760571s May 8 22:11:16.803: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 16.196478906s May 8 22:11:18.807: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.200535398s May 8 22:11:20.811: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 20.204204494s May 8 22:11:22.815: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Running", Reason="", readiness=true. Elapsed: 22.208118407s May 8 22:11:24.819: INFO: Pod "pod-subpath-test-configmap-2bln": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.212156779s STEP: Saw pod success May 8 22:11:24.819: INFO: Pod "pod-subpath-test-configmap-2bln" satisfied condition "success or failure" May 8 22:11:24.822: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-2bln container test-container-subpath-configmap-2bln: STEP: delete the pod May 8 22:11:24.890: INFO: Waiting for pod pod-subpath-test-configmap-2bln to disappear May 8 22:11:24.904: INFO: Pod pod-subpath-test-configmap-2bln no longer exists STEP: Deleting pod pod-subpath-test-configmap-2bln May 8 22:11:24.904: INFO: Deleting pod "pod-subpath-test-configmap-2bln" in namespace "subpath-4692" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:11:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4692" for this suite. • [SLOW TEST:24.785 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":216,"skipped":3704,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:11:24.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5367 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 8 22:11:25.066: INFO: Found 0 stateful pods, waiting for 3 May 8 22:11:35.071: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 22:11:35.071: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 22:11:35.071: INFO: Waiting for pod ss2-2 to enter Running - 
Ready=true, currently Running - Ready=true May 8 22:11:35.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5367 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 22:11:35.331: INFO: stderr: "I0508 22:11:35.219550 3638 log.go:172] (0xc0003c0210) (0xc00066bcc0) Create stream\nI0508 22:11:35.219680 3638 log.go:172] (0xc0003c0210) (0xc00066bcc0) Stream added, broadcasting: 1\nI0508 22:11:35.222288 3638 log.go:172] (0xc0003c0210) Reply frame received for 1\nI0508 22:11:35.222326 3638 log.go:172] (0xc0003c0210) (0xc000640640) Create stream\nI0508 22:11:35.222336 3638 log.go:172] (0xc0003c0210) (0xc000640640) Stream added, broadcasting: 3\nI0508 22:11:35.223132 3638 log.go:172] (0xc0003c0210) Reply frame received for 3\nI0508 22:11:35.223157 3638 log.go:172] (0xc0003c0210) (0xc00066bd60) Create stream\nI0508 22:11:35.223164 3638 log.go:172] (0xc0003c0210) (0xc00066bd60) Stream added, broadcasting: 5\nI0508 22:11:35.223934 3638 log.go:172] (0xc0003c0210) Reply frame received for 5\nI0508 22:11:35.277531 3638 log.go:172] (0xc0003c0210) Data frame received for 5\nI0508 22:11:35.277581 3638 log.go:172] (0xc00066bd60) (5) Data frame handling\nI0508 22:11:35.277617 3638 log.go:172] (0xc00066bd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 22:11:35.322476 3638 log.go:172] (0xc0003c0210) Data frame received for 3\nI0508 22:11:35.322512 3638 log.go:172] (0xc000640640) (3) Data frame handling\nI0508 22:11:35.322530 3638 log.go:172] (0xc000640640) (3) Data frame sent\nI0508 22:11:35.322641 3638 log.go:172] (0xc0003c0210) Data frame received for 3\nI0508 22:11:35.322672 3638 log.go:172] (0xc000640640) (3) Data frame handling\nI0508 22:11:35.322770 3638 log.go:172] (0xc0003c0210) Data frame received for 5\nI0508 22:11:35.322789 3638 log.go:172] (0xc00066bd60) (5) Data frame handling\nI0508 22:11:35.325092 3638 log.go:172] (0xc0003c0210) Data frame received for 1\nI0508 22:11:35.325326 3638 log.go:172] (0xc00066bcc0) (1) Data frame handling\nI0508 22:11:35.325347 3638 log.go:172] (0xc00066bcc0) (1) Data frame sent\nI0508 22:11:35.325373 3638 log.go:172] (0xc0003c0210) (0xc00066bcc0) Stream removed, broadcasting: 1\nI0508 22:11:35.325439 3638 log.go:172] (0xc0003c0210) Go away received\nI0508 22:11:35.325815 3638 log.go:172] (0xc0003c0210) (0xc00066bcc0) Stream removed, broadcasting: 1\nI0508 22:11:35.325836 3638 log.go:172] (0xc0003c0210) (0xc000640640) Stream removed, broadcasting: 3\nI0508 22:11:35.325855 3638 log.go:172] (0xc0003c0210) (0xc00066bd60) Stream removed, broadcasting: 5\n" May 8 22:11:35.332: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 22:11:35.332: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 8 22:11:45.374: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 8 22:11:55.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5367 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 22:11:55.623: INFO: stderr: "I0508 22:11:55.549309 3659 log.go:172] (0xc000ac6000) (0xc000a38000) Create stream\nI0508 22:11:55.549366 3659 log.go:172] (0xc000ac6000) (0xc000a38000) 
Stream added, broadcasting: 1\nI0508 22:11:55.551649 3659 log.go:172] (0xc000ac6000) Reply frame received for 1\nI0508 22:11:55.551683 3659 log.go:172] (0xc000ac6000) (0xc000936000) Create stream\nI0508 22:11:55.551693 3659 log.go:172] (0xc000ac6000) (0xc000936000) Stream added, broadcasting: 3\nI0508 22:11:55.552972 3659 log.go:172] (0xc000ac6000) Reply frame received for 3\nI0508 22:11:55.553038 3659 log.go:172] (0xc000ac6000) (0xc0009360a0) Create stream\nI0508 22:11:55.553059 3659 log.go:172] (0xc000ac6000) (0xc0009360a0) Stream added, broadcasting: 5\nI0508 22:11:55.554294 3659 log.go:172] (0xc000ac6000) Reply frame received for 5\nI0508 22:11:55.615871 3659 log.go:172] (0xc000ac6000) Data frame received for 3\nI0508 22:11:55.615902 3659 log.go:172] (0xc000936000) (3) Data frame handling\nI0508 22:11:55.615917 3659 log.go:172] (0xc000936000) (3) Data frame sent\nI0508 22:11:55.615928 3659 log.go:172] (0xc000ac6000) Data frame received for 3\nI0508 22:11:55.615937 3659 log.go:172] (0xc000936000) (3) Data frame handling\nI0508 22:11:55.615968 3659 log.go:172] (0xc000ac6000) Data frame received for 5\nI0508 22:11:55.615998 3659 log.go:172] (0xc0009360a0) (5) Data frame handling\nI0508 22:11:55.616012 3659 log.go:172] (0xc0009360a0) (5) Data frame sent\nI0508 22:11:55.616025 3659 log.go:172] (0xc000ac6000) Data frame received for 5\nI0508 22:11:55.616036 3659 log.go:172] (0xc0009360a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 22:11:55.617910 3659 log.go:172] (0xc000ac6000) Data frame received for 1\nI0508 22:11:55.617934 3659 log.go:172] (0xc000a38000) (1) Data frame handling\nI0508 22:11:55.617947 3659 log.go:172] (0xc000a38000) (1) Data frame sent\nI0508 22:11:55.617958 3659 log.go:172] (0xc000ac6000) (0xc000a38000) Stream removed, broadcasting: 1\nI0508 22:11:55.617968 3659 log.go:172] (0xc000ac6000) Go away received\nI0508 22:11:55.618419 3659 log.go:172] (0xc000ac6000) (0xc000a38000) Stream removed, broadcasting: 1\nI0508 22:11:55.618443 3659 log.go:172] (0xc000ac6000) (0xc000936000) Stream removed, broadcasting: 3\nI0508 22:11:55.618453 3659 log.go:172] (0xc000ac6000) (0xc0009360a0) Stream removed, broadcasting: 5\n" May 8 22:11:55.623: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 22:11:55.623: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 22:12:15.658: INFO: Waiting for StatefulSet statefulset-5367/ss2 to complete update STEP: Rolling back to a previous revision May 8 22:12:25.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5367 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 22:12:25.918: INFO: stderr: "I0508 22:12:25.808124 3680 log.go:172] (0xc00012efd0) (0xc0006b59a0) Create stream\nI0508 22:12:25.808202 3680 log.go:172] (0xc00012efd0) (0xc0006b59a0) Stream added, broadcasting: 1\nI0508 22:12:25.810958 3680 log.go:172] (0xc00012efd0) Reply frame received for 1\nI0508 22:12:25.811012 3680 log.go:172] (0xc00012efd0) (0xc0007aa000) Create stream\nI0508 22:12:25.811034 3680 log.go:172] (0xc00012efd0) (0xc0007aa000) Stream added, broadcasting: 3\nI0508 22:12:25.812050 3680 log.go:172] (0xc00012efd0) Reply frame received for 3\nI0508 22:12:25.812107 3680 log.go:172] (0xc00012efd0) (0xc0003a4000) Create stream\nI0508 22:12:25.812130 3680 log.go:172] (0xc00012efd0) (0xc0003a4000) Stream added, broadcasting: 
5\nI0508 22:12:25.813080 3680 log.go:172] (0xc00012efd0) Reply frame received for 5\nI0508 22:12:25.878655 3680 log.go:172] (0xc00012efd0) Data frame received for 5\nI0508 22:12:25.878682 3680 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0508 22:12:25.878700 3680 log.go:172] (0xc0003a4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 22:12:25.910754 3680 log.go:172] (0xc00012efd0) Data frame received for 3\nI0508 22:12:25.910887 3680 log.go:172] (0xc0007aa000) (3) Data frame handling\nI0508 22:12:25.910926 3680 log.go:172] (0xc0007aa000) (3) Data frame sent\nI0508 22:12:25.910948 3680 log.go:172] (0xc00012efd0) Data frame received for 3\nI0508 22:12:25.910983 3680 log.go:172] (0xc0007aa000) (3) Data frame handling\nI0508 22:12:25.911128 3680 log.go:172] (0xc00012efd0) Data frame received for 5\nI0508 22:12:25.911179 3680 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0508 22:12:25.912750 3680 log.go:172] (0xc00012efd0) Data frame received for 1\nI0508 22:12:25.912788 3680 log.go:172] (0xc0006b59a0) (1) Data frame handling\nI0508 22:12:25.912821 3680 log.go:172] (0xc0006b59a0) (1) Data frame sent\nI0508 22:12:25.912849 3680 log.go:172] (0xc00012efd0) (0xc0006b59a0) Stream removed, broadcasting: 1\nI0508 22:12:25.912875 3680 log.go:172] (0xc00012efd0) Go away received\nI0508 22:12:25.913463 3680 log.go:172] (0xc00012efd0) (0xc0006b59a0) Stream removed, broadcasting: 1\nI0508 22:12:25.913488 3680 log.go:172] (0xc00012efd0) (0xc0007aa000) Stream removed, broadcasting: 3\nI0508 22:12:25.913511 3680 log.go:172] (0xc00012efd0) (0xc0003a4000) Stream removed, broadcasting: 5\n" May 8 22:12:25.918: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 22:12:25.918: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 22:12:35.951: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 8 22:12:46.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5367 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 22:12:46.286: INFO: stderr: "I0508 22:12:46.191265 3700 log.go:172] (0xc000936630) (0xc0009f4000) Create stream\nI0508 22:12:46.191316 3700 log.go:172] (0xc000936630) (0xc0009f4000) Stream added, broadcasting: 1\nI0508 22:12:46.193926 3700 log.go:172] (0xc000936630) Reply frame received for 1\nI0508 22:12:46.193970 3700 log.go:172] (0xc000936630) (0xc000a60000) Create stream\nI0508 22:12:46.193985 3700 log.go:172] (0xc000936630) (0xc000a60000) Stream added, broadcasting: 3\nI0508 22:12:46.195006 3700 log.go:172] (0xc000936630) Reply frame received for 3\nI0508 22:12:46.195037 3700 log.go:172] (0xc000936630) (0xc000a600a0) Create stream\nI0508 22:12:46.195049 3700 log.go:172] (0xc000936630) (0xc000a600a0) Stream added, broadcasting: 5\nI0508 22:12:46.195937 3700 log.go:172] (0xc000936630) Reply frame received for 5\nI0508 22:12:46.280685 3700 log.go:172] (0xc000936630) Data frame received for 5\nI0508 22:12:46.280740 3700 log.go:172] (0xc000a600a0) (5) Data frame handling\nI0508 22:12:46.280757 3700 log.go:172] (0xc000a600a0) (5) Data frame sent\nI0508 22:12:46.280766 3700 log.go:172] (0xc000936630) Data frame received for 5\nI0508 22:12:46.280776 3700 log.go:172] (0xc000a600a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 22:12:46.280796 3700 log.go:172] (0xc000936630) Data 
frame received for 3\nI0508 22:12:46.280807 3700 log.go:172] (0xc000a60000) (3) Data frame handling\nI0508 22:12:46.280823 3700 log.go:172] (0xc000a60000) (3) Data frame sent\nI0508 22:12:46.280829 3700 log.go:172] (0xc000936630) Data frame received for 3\nI0508 22:12:46.280838 3700 log.go:172] (0xc000a60000) (3) Data frame handling\nI0508 22:12:46.282039 3700 log.go:172] (0xc000936630) Data frame received for 1\nI0508 22:12:46.282056 3700 log.go:172] (0xc0009f4000) (1) Data frame handling\nI0508 22:12:46.282068 3700 log.go:172] (0xc0009f4000) (1) Data frame sent\nI0508 22:12:46.282147 3700 log.go:172] (0xc000936630) (0xc0009f4000) Stream removed, broadcasting: 1\nI0508 22:12:46.282463 3700 log.go:172] (0xc000936630) (0xc0009f4000) Stream removed, broadcasting: 1\nI0508 22:12:46.282482 3700 log.go:172] (0xc000936630) (0xc000a60000) Stream removed, broadcasting: 3\nI0508 22:12:46.282491 3700 log.go:172] (0xc000936630) (0xc000a600a0) Stream removed, broadcasting: 5\n" May 8 22:12:46.287: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 22:12:46.287: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 22:13:06.303: INFO: Waiting for StatefulSet statefulset-5367/ss2 to complete update May 8 22:13:06.303: INFO: Waiting for Pod statefulset-5367/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 8 22:13:16.311: INFO: Deleting all statefulset in ns statefulset-5367 May 8 22:13:16.314: INFO: Scaling statefulset ss2 to 0 May 8 22:13:36.338: INFO: Waiting for statefulset status.replicas updated to 0 May 8 22:13:36.340: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:13:36.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5367" for this suite. 
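------------------------------
The rolling-update test above drives everything through the StatefulSet's pod template: changing the template image mints a new controller revision (ss2-84f9d6bf57, alongside the original ss2-65c7964b94 in the log), and the default RollingUpdate strategy then replaces pods one at a time from the highest ordinal down; rolling back is the same edit with the previous image, which re-adopts the old revision. A sketch of the update step (client-go >= v0.18; the container name "webserver" is an assumption and must match the template):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Editing the pod template creates a new controller revision; the default
	// RollingUpdate strategy then recreates pods one ordinal at a time,
	// highest first (ss2-2, ss2-1, ss2-0 above).
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"webserver","image":"docker.io/library/httpd:2.4.39-alpine"}]}}}}`)
	ss, err := cs.AppsV1().StatefulSets("statefulset-5367").Patch(
		context.TODO(), "ss2", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("update revision:", ss.Status.UpdateRevision)
	// Rolling back is the same patch with the previous image; the controller
	// re-adopts the earlier revision instead of minting another one.
}
------------------------------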
• [SLOW TEST:131.508 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":217,"skipped":3708,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:13:36.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 8 22:13:37.045: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 8 22:13:39.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572817, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572817, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572817, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572817, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:13:42.093: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:13:42.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:13:42.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4011" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.516 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":218,"skipped":3711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:13:42.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 22:13:44.313: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 22:13:46.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:13:48.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572824, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:13:51.379: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 8 22:13:51.402: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:13:51.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5459" for this suite. STEP: Destroying namespace "webhook-5459-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.591 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":219,"skipped":3736,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:13:51.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2456 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 22:13:51.636: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 22:14:17.712: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.33:8080/dial?request=hostname&protocol=udp&host=10.244.1.141&port=8081&tries=1'] Namespace:pod-network-test-2456 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 22:14:17.712: INFO: >>> kubeConfig: /root/.kube/config I0508 22:14:17.745958 6 log.go:172] (0xc001732bb0) (0xc001e4c0a0) Create stream I0508 22:14:17.746006 6 log.go:172] (0xc001732bb0) (0xc001e4c0a0) Stream added, broadcasting: 1 I0508 22:14:17.748602 6 log.go:172] (0xc001732bb0) Reply frame received for 1 I0508 22:14:17.748661 6 log.go:172] (0xc001732bb0) (0xc0024fba40) Create stream I0508 22:14:17.748677 6 log.go:172] (0xc001732bb0) (0xc0024fba40) Stream added, broadcasting: 3 I0508 22:14:17.750067 6 log.go:172] (0xc001732bb0) Reply frame received for 3 I0508 22:14:17.750111 6 log.go:172] (0xc001732bb0) (0xc00231b040) Create stream I0508 22:14:17.750146 6 log.go:172] (0xc001732bb0) (0xc00231b040) Stream added, broadcasting: 5 I0508 22:14:17.751308 6 log.go:172] (0xc001732bb0) Reply frame received for 5 I0508 22:14:17.826552 6 log.go:172] (0xc001732bb0) Data frame received for 3 I0508 22:14:17.826583 6 log.go:172] (0xc0024fba40) (3) Data frame handling I0508 22:14:17.826601 6 log.go:172] (0xc0024fba40) (3) Data frame sent I0508 22:14:17.827139 6 log.go:172] (0xc001732bb0) Data frame received for 5 I0508 22:14:17.827185 6 log.go:172] (0xc00231b040) (5) Data frame handling I0508 22:14:17.827225 6 log.go:172] (0xc001732bb0) Data frame received for 3 I0508 22:14:17.827244 6 log.go:172] (0xc0024fba40) (3) Data frame handling I0508 22:14:17.829000 6 log.go:172] (0xc001732bb0) Data frame received for 1 I0508 22:14:17.829025 6 log.go:172] (0xc001e4c0a0) (1) Data frame handling I0508 22:14:17.829073 6 log.go:172] (0xc001e4c0a0) (1) Data frame sent I0508 22:14:17.829303 6 log.go:172] (0xc001732bb0) (0xc001e4c0a0) Stream removed, broadcasting: 1 I0508 22:14:17.829353 6 log.go:172] (0xc001732bb0) Go away received I0508 22:14:17.829488 6 log.go:172] (0xc001732bb0) (0xc001e4c0a0) Stream removed, broadcasting: 1 I0508 22:14:17.829516 6 log.go:172] (0xc001732bb0) (0xc0024fba40) Stream removed, broadcasting: 3 I0508 22:14:17.829527 6 log.go:172] (0xc001732bb0) (0xc00231b040) Stream removed, broadcasting: 5 May 8 22:14:17.829: INFO: Waiting for responses: map[] May 8 22:14:17.837: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.33:8080/dial?request=hostname&protocol=udp&host=10.244.2.32&port=8081&tries=1'] Namespace:pod-network-test-2456 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 22:14:17.837: INFO: >>> kubeConfig: /root/.kube/config I0508 22:14:17.870769 6 log.go:172] (0xc001733130) (0xc001e4c5a0) Create stream I0508 22:14:17.870820 6 log.go:172] (0xc001733130) (0xc001e4c5a0) Stream added, broadcasting: 1 I0508 22:14:17.873007 6 log.go:172] (0xc001733130) Reply frame received for 1 I0508 22:14:17.873046 6 log.go:172] (0xc001733130) (0xc001e4c640) Create stream I0508 22:14:17.873059 6 log.go:172] (0xc001733130) (0xc001e4c640) Stream added, broadcasting: 3 I0508 22:14:17.873946 6 log.go:172] (0xc001733130) Reply frame received for 3 I0508 22:14:17.873978 6 log.go:172] (0xc001733130) (0xc0024fbd60) Create stream I0508 22:14:17.873988 6 log.go:172] (0xc001733130) (0xc0024fbd60) Stream added, broadcasting: 5 I0508 22:14:17.874648 6 log.go:172] (0xc001733130) Reply frame received for 5 I0508 22:14:17.949814 6 log.go:172] (0xc001733130) Data frame received for 3 I0508 22:14:17.949838 6 log.go:172] (0xc001e4c640) (3) Data frame handling I0508 22:14:17.949857 6 log.go:172] 
(0xc001e4c640) (3) Data frame sent I0508 22:14:17.950287 6 log.go:172] (0xc001733130) Data frame received for 3 I0508 22:14:17.950331 6 log.go:172] (0xc001e4c640) (3) Data frame handling I0508 22:14:17.950385 6 log.go:172] (0xc001733130) Data frame received for 5 I0508 22:14:17.950405 6 log.go:172] (0xc0024fbd60) (5) Data frame handling I0508 22:14:17.952035 6 log.go:172] (0xc001733130) Data frame received for 1 I0508 22:14:17.952057 6 log.go:172] (0xc001e4c5a0) (1) Data frame handling I0508 22:14:17.952082 6 log.go:172] (0xc001e4c5a0) (1) Data frame sent I0508 22:14:17.952109 6 log.go:172] (0xc001733130) (0xc001e4c5a0) Stream removed, broadcasting: 1 I0508 22:14:17.952130 6 log.go:172] (0xc001733130) Go away received I0508 22:14:17.952251 6 log.go:172] (0xc001733130) (0xc001e4c5a0) Stream removed, broadcasting: 1 I0508 22:14:17.952281 6 log.go:172] (0xc001733130) (0xc001e4c640) Stream removed, broadcasting: 3 I0508 22:14:17.952301 6 log.go:172] (0xc001733130) (0xc0024fbd60) Stream removed, broadcasting: 5 May 8 22:14:17.952: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:14:17.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2456" for this suite. • [SLOW TEST:26.432 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3737,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:14:17.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:14:18.006: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:14:24.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2828" for this suite. 
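------------------------------
The websocket test above submits a pod and then runs a command in it through the API server's pods/exec subresource. A sketch of the same round trip with client-go; the stock executor negotiates SPDY rather than websockets, but it exercises the identical endpoint (pod name and namespace are illustrative, and the pod must already be running):

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Build the exec subresource request the same way kubectl exec does.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pods-2828").      // namespace from the test; any running pod works
		Name("pod-exec-websockets"). // hypothetical pod name
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution over the exec subresource"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
------------------------------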
• [SLOW TEST:7.000 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3753,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:14:24.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 22:14:28.702: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 22:14:30.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572869, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572869, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572870, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572868, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:14:33.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572869, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572869, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572870, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572868, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 8 22:14:34.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572869, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572869, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572870, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572868, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:14:38.055: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:14:38.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-814" for this suite. STEP: Destroying namespace "webhook-814-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.550 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":222,"skipped":3757,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:14:39.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 22:14:41.311: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 22:14:43.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572881, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572881, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572881, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724572881, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 22:14:46.522: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the 
AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:14:46.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9735" for this suite. STEP: Destroying namespace "webhook-9735-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.249 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":223,"skipped":3758,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:14:46.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1713 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 8 22:14:46.868: INFO: Found 0 stateful pods, waiting for 3 May 8 22:14:56.920: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 22:14:56.920: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 22:14:56.920: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 8 22:15:06.874: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 22:15:06.874: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 22:15:06.874: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set 
template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 8 22:15:06.939: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 8 22:15:17.004: INFO: Updating stateful set ss2 May 8 22:15:17.038: INFO: Waiting for Pod statefulset-1713/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 8 22:15:27.046: INFO: Waiting for Pod statefulset-1713/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 8 22:15:37.739: INFO: Found 2 stateful pods, waiting for 3 May 8 22:15:47.744: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 22:15:47.744: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 22:15:47.744: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 8 22:15:47.768: INFO: Updating stateful set ss2 May 8 22:15:47.783: INFO: Waiting for Pod statefulset-1713/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 8 22:15:57.809: INFO: Updating stateful set ss2 May 8 22:15:57.859: INFO: Waiting for StatefulSet statefulset-1713/ss2 to complete update May 8 22:15:57.859: INFO: Waiting for Pod statefulset-1713/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 8 22:16:07.867: INFO: Deleting all statefulset in ns statefulset-1713 May 8 22:16:07.870: INFO: Scaling statefulset ss2 to 0 May 8 22:16:47.903: INFO: Waiting for statefulset status.replicas updated to 0 May 8 22:16:47.907: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:16:47.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1713" for this suite. 
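The canary and phased steps above hinge on the StatefulSet RollingUpdate partition: only pods with an ordinal greater than or equal to spec.updateStrategy.rollingUpdate.partition are moved to the new revision, so lowering the partition phases the rollout, and a partition larger than the replica count updates nothing. A minimal sketch of the same flow with kubectl, reusing the names and images from the log (the container name webserver is an assumption, not taken from the output):

    # Stage a canary: with 3 replicas and partition=2, only ordinal ss2-2 picks up the new template.
    kubectl -n statefulset-1713 patch statefulset ss2 --type merge \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
    kubectl -n statefulset-1713 set image statefulset/ss2 \
      webserver=docker.io/library/httpd:2.4.39-alpine    # container name is hypothetical
    # Phase the rollout: lowering the partition to 0 updates the remaining pods in ordinal order.
    kubectl -n statefulset-1713 patch statefulset ss2 --type merge \
      -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
    kubectl -n statefulset-1713 rollout status statefulset/ss2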
• [SLOW TEST:121.197 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":224,"skipped":3772,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:16:47.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 8 22:16:48.027: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:48.032: INFO: Number of nodes with available pods: 0 May 8 22:16:48.032: INFO: Node jerma-worker is running more than one daemon pod May 8 22:16:49.037: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:49.040: INFO: Number of nodes with available pods: 0 May 8 22:16:49.040: INFO: Node jerma-worker is running more than one daemon pod May 8 22:16:50.037: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:50.040: INFO: Number of nodes with available pods: 0 May 8 22:16:50.040: INFO: Node jerma-worker is running more than one daemon pod May 8 22:16:51.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:51.210: INFO: Number of nodes with available pods: 0 May 8 22:16:51.211: INFO: Node jerma-worker is running more than one daemon pod May 8 22:16:52.037: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:52.041: INFO: Number of nodes with available pods: 1 May 8 22:16:52.041: INFO: Node jerma-worker is running more than one daemon pod May 8 22:16:53.038: INFO: DaemonSet pods can't tolerate node jerma-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:53.047: INFO: Number of nodes with available pods: 2 May 8 22:16:53.047: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 8 22:16:53.132: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:53.182: INFO: Number of nodes with available pods: 1 May 8 22:16:53.182: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:16:54.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:54.317: INFO: Number of nodes with available pods: 1 May 8 22:16:54.317: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:16:55.187: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:55.191: INFO: Number of nodes with available pods: 1 May 8 22:16:55.191: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:16:56.190: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:56.194: INFO: Number of nodes with available pods: 1 May 8 22:16:56.194: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:16:57.187: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:57.190: INFO: Number of nodes with available pods: 1 May 8 22:16:57.190: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:16:58.188: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:58.191: INFO: Number of nodes with available pods: 1 May 8 22:16:58.191: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:16:59.188: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:16:59.191: INFO: Number of nodes with available pods: 1 May 8 22:16:59.191: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:17:00.187: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:17:00.190: INFO: Number of nodes with available pods: 1 May 8 22:17:00.190: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:17:01.188: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:17:01.191: INFO: Number of nodes with available pods: 1 May 8 22:17:01.191: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:17:02.187: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node May 8 22:17:02.191: INFO: Number of nodes with available pods: 1 May 8 22:17:02.191: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:17:03.186: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:17:03.189: INFO: Number of nodes with available pods: 2 May 8 22:17:03.189: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8794, will wait for the garbage collector to delete the pods May 8 22:17:03.248: INFO: Deleting DaemonSet.extensions daemon-set took: 5.921327ms May 8 22:17:03.549: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.420766ms May 8 22:17:09.552: INFO: Number of nodes with available pods: 0 May 8 22:17:09.552: INFO: Number of running nodes: 0, number of available pods: 0 May 8 22:17:09.555: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8794/daemonsets","resourceVersion":"14551056"},"items":null} May 8 22:17:09.558: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8794/pods","resourceVersion":"14551056"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:17:09.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8794" for this suite. • [SLOW TEST:21.626 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":225,"skipped":3787,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:17:09.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-b0416134-3dfa-48c3-928a-7a59a76078f9 STEP: Creating a pod to test consume secrets May 8 22:17:09.652: INFO: Waiting up to 5m0s for pod "pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e" in namespace "secrets-2030" to be "success or failure" May 8 22:17:09.656: INFO: Pod "pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.720338ms May 8 22:17:11.699: INFO: Pod "pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046516197s May 8 22:17:13.703: INFO: Pod "pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051080743s STEP: Saw pod success May 8 22:17:13.703: INFO: Pod "pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e" satisfied condition "success or failure" May 8 22:17:13.707: INFO: Trying to get logs from node jerma-worker pod pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e container secret-volume-test: STEP: delete the pod May 8 22:17:13.776: INFO: Waiting for pod pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e to disappear May 8 22:17:13.790: INFO: Pod pod-secrets-635572c9-b19d-49f2-9deb-a96a7e08643e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:17:13.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2030" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3801,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:17:13.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:17:13.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9081" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":227,"skipped":3807,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:17:13.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-f4066cd2-ba5c-4cac-a422-f6f288ef27d1 STEP: Creating configMap with name cm-test-opt-upd-f20ccc1a-8e9e-44b6-b23c-cfcea68d5866 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f4066cd2-ba5c-4cac-a422-f6f288ef27d1 STEP: Updating configmap cm-test-opt-upd-f20ccc1a-8e9e-44b6-b23c-cfcea68d5866 STEP: Creating configMap with name cm-test-opt-create-e7b05e5f-9cad-4c92-8568-12423221b4e6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:18:36.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2407" for this suite. • [SLOW TEST:82.954 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3813,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:18:36.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 8 22:18:36.924: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 8 22:18:36.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5483' May 8 22:18:37.936: INFO: stderr: "" May 8 22:18:37.936: INFO: stdout: 
"service/agnhost-slave created\n" May 8 22:18:37.936: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 8 22:18:37.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5483' May 8 22:18:38.271: INFO: stderr: "" May 8 22:18:38.271: INFO: stdout: "service/agnhost-master created\n" May 8 22:18:38.271: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 8 22:18:38.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5483' May 8 22:18:38.564: INFO: stderr: "" May 8 22:18:38.564: INFO: stdout: "service/frontend created\n" May 8 22:18:38.564: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 8 22:18:38.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5483' May 8 22:18:38.817: INFO: stderr: "" May 8 22:18:38.817: INFO: stdout: "deployment.apps/frontend created\n" May 8 22:18:38.818: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 8 22:18:38.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5483' May 8 22:18:39.140: INFO: stderr: "" May 8 22:18:39.140: INFO: stdout: "deployment.apps/agnhost-master created\n" May 8 22:18:39.140: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 8 22:18:39.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5483' May 8 22:18:39.410: INFO: stderr: "" May 8 22:18:39.410: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 8 22:18:39.410: INFO: Waiting for all frontend pods to be Running. May 8 22:18:49.461: INFO: Waiting for frontend to serve content. May 8 22:18:49.496: INFO: Trying to add a new entry to the guestbook. May 8 22:18:49.530: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources May 8 22:18:49.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5483' May 8 22:18:49.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 22:18:49.703: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 8 22:18:49.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5483' May 8 22:18:49.871: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 22:18:49.871: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 8 22:18:49.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5483' May 8 22:18:50.035: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 22:18:50.035: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 8 22:18:50.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5483' May 8 22:18:50.137: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 22:18:50.137: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 8 22:18:50.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5483' May 8 22:18:50.231: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 22:18:50.231: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 8 22:18:50.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5483' May 8 22:18:50.353: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 22:18:50.353: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:18:50.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5483" for this suite. 
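The validation step above talks to the frontend service from inside the cluster; one way to reproduce it by hand is to port-forward the service and issue a request (a sketch, assuming the frontend serves plain HTTP on port 80 as its manifest declares; the transient test namespace stands in for wherever the manifests were applied):

    kubectl -n kubectl-5483 get pods -l app=guestbook,tier=frontend   # wait for all 3 to be Running
    kubectl -n kubectl-5483 port-forward service/frontend 8080:80 &
    curl -s http://localhost:8080/        # any response shows the frontend is serving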
• [SLOW TEST:13.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":229,"skipped":3818,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:18:50.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 8 22:18:50.489: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7226 /api/v1/namespaces/watch-7226/configmaps/e2e-watch-test-watch-closed 67c6bcc1-e113-401f-8482-fe1848e8724b 14551580 0 2020-05-08 22:18:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 22:18:50.489: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7226 /api/v1/namespaces/watch-7226/configmaps/e2e-watch-test-watch-closed 67c6bcc1-e113-401f-8482-fe1848e8724b 14551584 0 2020-05-08 22:18:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 8 22:18:50.556: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7226 /api/v1/namespaces/watch-7226/configmaps/e2e-watch-test-watch-closed 67c6bcc1-e113-401f-8482-fe1848e8724b 14551590 0 2020-05-08 22:18:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 22:18:50.557: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7226 /api/v1/namespaces/watch-7226/configmaps/e2e-watch-test-watch-closed 67c6bcc1-e113-401f-8482-fe1848e8724b 14551591 0 2020-05-08 22:18:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} 
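Resuming works because every event above carries the object's resourceVersion; starting a new watch at the last observed version replays everything that happened while the first watch was closed, exactly as the MODIFIED (mutation: 2) and DELETED events show. A sketch against the raw API, plugging in resourceVersion 14551584 from the last event the first watch delivered:

    # Stream all configmap events in the namespace that occurred after resourceVersion 14551584.
    kubectl get --raw \
      '/api/v1/namespaces/watch-7226/configmaps?watch=true&resourceVersion=14551584'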
[AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:18:50.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7226" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":230,"skipped":3879,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:18:50.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 8 22:18:57.238: INFO: Pod pod-hostip-f756ac60-7f35-46ff-a215-549006513b49 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:18:57.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7801" for this suite. • [SLOW TEST:6.665 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3887,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:18:57.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:18:57.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8481" for this suite. 
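The QOS class verified above is computed from the resource stanza: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits yields BestEffort. A minimal sketch with a hypothetical pod name, reusing an image seen elsewhere in this run:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:             # limits identical to requests -> Guaranteed
            cpu: 100m
            memory: 100Mi
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed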
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":232,"skipped":3888,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:18:57.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:18:57.514: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 8 22:19:02.585: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 8 22:19:04.593: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 8 22:19:04.649: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7528 /apis/apps/v1/namespaces/deployment-7528/deployments/test-cleanup-deployment f37456ab-dd80-4dc7-b7db-afb483b1b9b4 14551748 1 2020-05-08 22:19:04 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e74b98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 8 22:19:04.676: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7528 /apis/apps/v1/namespaces/deployment-7528/replicasets/test-cleanup-deployment-55ffc6b7b6 31492220-5e47-40ab-adbf-8ab3cdf4b0be 14551750 1 2020-05-08 22:19:04 +0000 UTC 
map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment f37456ab-dd80-4dc7-b7db-afb483b1b9b4 0xc001fa8737 0xc001fa8738}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001fa87a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 8 22:19:04.676: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 8 22:19:04.676: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7528 /apis/apps/v1/namespaces/deployment-7528/replicasets/test-cleanup-controller ef46ea9f-00c8-4351-8780-96b5ba19f428 14551749 1 2020-05-08 22:18:57 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment f37456ab-dd80-4dc7-b7db-afb483b1b9b4 0xc001fa8667 0xc001fa8668}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001fa86c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 8 22:19:04.819: INFO: Pod "test-cleanup-controller-j45hp" is available: &Pod{ObjectMeta:{test-cleanup-controller-j45hp test-cleanup-controller- deployment-7528 /api/v1/namespaces/deployment-7528/pods/test-cleanup-controller-j45hp 8cb591df-3be6-4c3d-ac27-1a7a02328517 14551735 0 2020-05-08 22:18:57 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ef46ea9f-00c8-4351-8780-96b5ba19f428 0xc001fa8be7 0xc001fa8be8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ggjr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ggjr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ggjr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:18:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:19:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:19:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:18:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.154,StartTime:2020-05-08 22:18:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:19:01 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4d061f2dafbfba2f716f2e52bb12a7dc99e591cd7d186b6795c41e671b0ebcc1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:19:04.819: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-n878p" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-n878p test-cleanup-deployment-55ffc6b7b6- deployment-7528 /api/v1/namespaces/deployment-7528/pods/test-cleanup-deployment-55ffc6b7b6-n878p a9e01666-9deb-4402-88e1-5ddc12e86cde 14551756 0 2020-05-08 22:19:04 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 31492220-5e47-40ab-adbf-8ab3cdf4b0be 0xc001fa8d77 0xc001fa8d78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ggjr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ggjr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ggjr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNames
pace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:19:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:19:04.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7528" for this suite. • [SLOW TEST:7.556 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":233,"skipped":3907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:19:04.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:19:05.277: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 8 22:19:05.338: INFO: Number of nodes with available pods: 0 May 8 22:19:05.338: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
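This step and the later relabeling to green drive scheduling purely through node labels: the DaemonSet carries a nodeSelector, so labeling a node to match launches a daemon pod there and relabeling it away evicts the pod. A sketch of the same mechanics (the label key color and the selector label app: daemon-set are assumptions; the node name is the one from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          nodeSelector:
            color: blue                  # only nodes labeled color=blue run the daemon pod
          containers:
          - name: app
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    kubectl label node jerma-worker2 color=blue                # daemon pod launches on the node
    kubectl label node jerma-worker2 color=green --overwrite   # daemon pod is removed again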
May 8 22:19:05.670: INFO: Number of nodes with available pods: 0 May 8 22:19:05.670: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:06.753: INFO: Number of nodes with available pods: 0 May 8 22:19:06.753: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:07.705: INFO: Number of nodes with available pods: 0 May 8 22:19:07.705: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:08.723: INFO: Number of nodes with available pods: 0 May 8 22:19:08.723: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:09.675: INFO: Number of nodes with available pods: 1 May 8 22:19:09.675: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 8 22:19:09.735: INFO: Number of nodes with available pods: 1 May 8 22:19:09.735: INFO: Number of running nodes: 0, number of available pods: 1 May 8 22:19:10.739: INFO: Number of nodes with available pods: 0 May 8 22:19:10.739: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 8 22:19:10.811: INFO: Number of nodes with available pods: 0 May 8 22:19:10.811: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:11.816: INFO: Number of nodes with available pods: 0 May 8 22:19:11.816: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:12.831: INFO: Number of nodes with available pods: 0 May 8 22:19:12.831: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:13.816: INFO: Number of nodes with available pods: 0 May 8 22:19:13.816: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:14.829: INFO: Number of nodes with available pods: 0 May 8 22:19:14.829: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:15.816: INFO: Number of nodes with available pods: 0 May 8 22:19:15.817: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:16.816: INFO: Number of nodes with available pods: 0 May 8 22:19:16.816: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:17.816: INFO: Number of nodes with available pods: 0 May 8 22:19:17.816: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:18.816: INFO: Number of nodes with available pods: 0 May 8 22:19:18.816: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:19.815: INFO: Number of nodes with available pods: 0 May 8 22:19:19.815: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:20.816: INFO: Number of nodes with available pods: 0 May 8 22:19:20.816: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:22.371: INFO: Number of nodes with available pods: 0 May 8 22:19:22.371: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:22.816: INFO: Number of nodes with available pods: 0 May 8 22:19:22.816: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:23.815: INFO: Number of nodes with available pods: 0 May 8 22:19:23.815: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:19:24.814: INFO: Number of nodes with available pods: 1 May 8 22:19:24.814: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet 
"daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7234, will wait for the garbage collector to delete the pods May 8 22:19:24.875: INFO: Deleting DaemonSet.extensions daemon-set took: 5.950026ms May 8 22:19:24.975: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.24909ms May 8 22:19:28.080: INFO: Number of nodes with available pods: 0 May 8 22:19:28.080: INFO: Number of running nodes: 0, number of available pods: 0 May 8 22:19:28.083: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7234/daemonsets","resourceVersion":"14551913"},"items":null} May 8 22:19:28.086: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7234/pods","resourceVersion":"14551913"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:19:28.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7234" for this suite. • [SLOW TEST:23.149 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":234,"skipped":3940,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:19:28.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:19:43.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1744" for this suite. STEP: Destroying namespace "nsdeletetest-3930" for this suite. May 8 22:19:43.402: INFO: Namespace nsdeletetest-3930 was already deleted STEP: Destroying namespace "nsdeletetest-1651" for this suite. 
• [SLOW TEST:15.257 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":235,"skipped":3952,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:19:43.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 8 22:19:43.536: INFO: Waiting up to 5m0s for pod "var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718" in namespace "var-expansion-9361" to be "success or failure" May 8 22:19:43.558: INFO: Pod "var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718": Phase="Pending", Reason="", readiness=false. Elapsed: 21.651072ms May 8 22:19:45.562: INFO: Pod "var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026414279s May 8 22:19:47.567: INFO: Pod "var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718": Phase="Running", Reason="", readiness=true. Elapsed: 4.030933774s May 8 22:19:49.571: INFO: Pod "var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035024279s STEP: Saw pod success May 8 22:19:49.571: INFO: Pod "var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718" satisfied condition "success or failure" May 8 22:19:49.574: INFO: Trying to get logs from node jerma-worker pod var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718 container dapi-container: STEP: delete the pod May 8 22:19:49.612: INFO: Waiting for pod var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718 to disappear May 8 22:19:49.632: INFO: Pod var-expansion-74391c69-50f7-46b2-90b6-bb6f4879c718 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:19:49.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9361" for this suite. 
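The substitution exercised above is performed by Kubernetes itself, not by a shell: any $(VAR) in command or args is replaced with the value of the matching env entry before the container starts. A minimal sketch with hypothetical names (any image that provides /bin/echo works):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        env:
        - name: MESSAGE
          value: "test message"
        command: ["/bin/echo"]
        args: ["$(MESSAGE)"]          # expanded to "test message" before exec
    EOF
    kubectl logs var-expansion-demo   # test message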
• [SLOW TEST:6.236 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3962,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:19:49.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 8 22:19:49.707: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:19:58.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1879" for this suite. 
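Init containers are declared under spec.initContainers and, even on a RestartAlways pod, each runs once to completion, in order, before the regular containers start. A minimal sketch with illustrative names and images:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: busybox
        command: ["/bin/true"]
      - name: init2
        image: busybox
        command: ["/bin/true"]
      containers:
      - name: app
        image: nginx
    EOF
    # STATUS passes through Init:0/2 and Init:1/2 before reaching Running:
    kubectl get pod init-demo --watch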
• [SLOW TEST:8.977 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":237,"skipped":3973,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:19:58.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-5ad228d4-aa27-43aa-9b54-1f5cbc2de417 STEP: Creating secret with name s-test-opt-upd-89d36059-4e72-42e8-90c7-f74faec2fe96 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5ad228d4-aa27-43aa-9b54-1f5cbc2de417 STEP: Updating secret s-test-opt-upd-89d36059-4e72-42e8-90c7-f74faec2fe96 STEP: Creating secret with name s-test-opt-create-17f11864-7d44-443a-9619-7ff6947361ea STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:20:07.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5475" for this suite. 
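The volume under test is a projected volume whose secret sources are marked optional: the pod starts even if a source is absent, and the kubelet folds later creation, update, or deletion of those secrets into the mounted files on its next sync. A minimal sketch, with names echoing the test's s-test-opt-* prefixes purely for illustration:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      containers:
      - name: creds-watcher
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds
      volumes:
      - name: creds
        projected:
          sources:
          - secret:
              name: s-test-opt-del      # may be deleted after the pod starts
              optional: true
          - secret:
              name: s-test-opt-create   # may not exist yet when the pod starts
              optional: true
    EOF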
• [SLOW TEST:8.433 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3977,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:20:07.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:20:07.146: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 8 22:20:07.164: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:07.220: INFO: Number of nodes with available pods: 0 May 8 22:20:07.220: INFO: Node jerma-worker is running more than one daemon pod May 8 22:20:08.225: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:08.229: INFO: Number of nodes with available pods: 0 May 8 22:20:08.229: INFO: Node jerma-worker is running more than one daemon pod May 8 22:20:09.464: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:09.483: INFO: Number of nodes with available pods: 0 May 8 22:20:09.483: INFO: Node jerma-worker is running more than one daemon pod May 8 22:20:10.263: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:10.266: INFO: Number of nodes with available pods: 0 May 8 22:20:10.266: INFO: Node jerma-worker is running more than one daemon pod May 8 22:20:11.224: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:11.228: INFO: Number of nodes with available pods: 2 May 8 22:20:11.228: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 8 22:20:11.299: INFO: Wrong image for pod: daemon-set-7s9xf. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:11.299: INFO: Wrong image for pod: daemon-set-wmz8f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:11.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:12.514: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:12.514: INFO: Wrong image for pod: daemon-set-wmz8f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:12.647: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:13.400: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:13.400: INFO: Wrong image for pod: daemon-set-wmz8f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:13.412: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:14.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:14.332: INFO: Wrong image for pod: daemon-set-wmz8f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:14.337: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:15.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:15.332: INFO: Wrong image for pod: daemon-set-wmz8f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:15.332: INFO: Pod daemon-set-wmz8f is not available May 8 22:20:15.337: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:16.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:16.332: INFO: Pod daemon-set-888ds is not available May 8 22:20:16.337: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:17.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:17.332: INFO: Pod daemon-set-888ds is not available May 8 22:20:17.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:18.331: INFO: Wrong image for pod: daemon-set-7s9xf. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:18.331: INFO: Pod daemon-set-888ds is not available May 8 22:20:18.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:19.330: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:19.333: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:20.331: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:20.331: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:20.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:21.335: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:21.335: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:21.340: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:22.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:22.332: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:22.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:23.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:23.332: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:23.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:24.331: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:24.331: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:24.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:25.331: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:25.331: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:25.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:26.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 8 22:20:26.332: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:26.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:27.332: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:27.332: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:27.345: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:28.331: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:28.331: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:28.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:29.336: INFO: Wrong image for pod: daemon-set-7s9xf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 8 22:20:29.336: INFO: Pod daemon-set-7s9xf is not available May 8 22:20:29.340: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:30.331: INFO: Pod daemon-set-f29ng is not available May 8 22:20:30.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 8 22:20:30.340: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:30.343: INFO: Number of nodes with available pods: 1 May 8 22:20:30.343: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:20:31.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:31.631: INFO: Number of nodes with available pods: 1 May 8 22:20:31.631: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:20:32.347: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:32.350: INFO: Number of nodes with available pods: 1 May 8 22:20:32.350: INFO: Node jerma-worker2 is running more than one daemon pod May 8 22:20:33.348: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 22:20:33.352: INFO: Number of nodes with available pods: 2 May 8 22:20:33.352: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5178, will wait for the garbage collector to delete the pods May 8 22:20:33.430: INFO: Deleting DaemonSet.extensions daemon-set took: 7.044527ms May 8 22:20:33.731: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.249573ms May 8 22:20:39.534: INFO: Number of nodes with available pods: 0 May 8 22:20:39.534: INFO: Number of running nodes: 0, number of available pods: 0 May 8 22:20:39.537: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5178/daemonsets","resourceVersion":"14552374"},"items":null} May 8 22:20:39.539: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5178/pods","resourceVersion":"14552374"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:20:39.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5178" for this suite. 
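Because the DaemonSet's update strategy is RollingUpdate, changing the pod template's image is all it takes to trigger the node-by-node replacement traced above (the old pod becomes unavailable and is deleted, the new pod comes up, repeated per node). A hedged kubectl equivalent, assuming an illustrative container name app since the log never prints the template:

    # Point the DaemonSet at the new image, then watch the rollout converge.
    kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    kubectl rollout status daemonset/daemon-set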
• [SLOW TEST:32.503 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":239,"skipped":3981,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:20:39.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-35b7af0a-76a4-4c8e-807a-36379e85031d STEP: Creating a pod to test consume configMaps May 8 22:20:39.641: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9" in namespace "configmap-8946" to be "success or failure" May 8 22:20:39.646: INFO: Pod "pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368863ms May 8 22:20:41.649: INFO: Pod "pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007994949s May 8 22:20:43.653: INFO: Pod "pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01172573s STEP: Saw pod success May 8 22:20:43.653: INFO: Pod "pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9" satisfied condition "success or failure" May 8 22:20:43.656: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9 container configmap-volume-test: STEP: delete the pod May 8 22:20:43.677: INFO: Waiting for pod pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9 to disappear May 8 22:20:43.754: INFO: Pod pod-configmaps-4ada06af-18d4-48c3-8297-4f00678e01c9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:20:43.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8946" for this suite. 
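The pattern here is a ConfigMap mounted as a volume, with every data key surfaced as a file under the mount path. A minimal sketch with illustrative names; the data-1/value-1 pair stands in for whatever the generated ConfigMap carried:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-config
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        # Each ConfigMap key appears as a file under the mount path.
        command: ["cat", "/etc/configmap-volume/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: demo-config
    EOF
    kubectl logs configmap-volume-demo   # prints: value-1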
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3985,"failed":0} SSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:20:43.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6969 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6969;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6969 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6969;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6969.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6969.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6969.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6969.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6969.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6969.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6969.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 89.163.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.163.89_udp@PTR;check="$$(dig +tcp +noall +answer +search 89.163.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.163.89_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6969 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6969;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6969 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6969;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6969.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6969.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6969.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6969.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6969.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6969.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6969.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6969.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6969.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 89.163.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.163.89_udp@PTR;check="$$(dig +tcp +noall +answer +search 89.163.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.163.89_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 22:20:50.067: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.070: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.073: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.076: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.080: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.083: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.090: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.112: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.115: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.118: INFO: Unable to read jessie_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.121: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.124: INFO: Unable to read jessie_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.127: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.130: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.132: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:50.148: INFO: Lookups using dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6969 wheezy_tcp@dns-test-service.dns-6969 wheezy_udp@dns-test-service.dns-6969.svc wheezy_tcp@dns-test-service.dns-6969.svc wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6969 jessie_tcp@dns-test-service.dns-6969 jessie_udp@dns-test-service.dns-6969.svc jessie_tcp@dns-test-service.dns-6969.svc jessie_udp@_http._tcp.dns-test-service.dns-6969.svc jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc] May 8 22:20:55.153: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.156: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.165: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.174: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.195: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.198: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.200: INFO: Unable to read jessie_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.207: INFO: Unable to read jessie_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.210: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.213: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.216: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:20:55.286: INFO: Lookups using dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6969 wheezy_tcp@dns-test-service.dns-6969 wheezy_udp@dns-test-service.dns-6969.svc wheezy_tcp@dns-test-service.dns-6969.svc wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6969 jessie_tcp@dns-test-service.dns-6969 jessie_udp@dns-test-service.dns-6969.svc jessie_tcp@dns-test-service.dns-6969.svc jessie_udp@_http._tcp.dns-test-service.dns-6969.svc jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc] May 8 22:21:00.153: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.157: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969 from pod 
dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.166: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.171: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.174: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.176: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.192: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.195: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.198: INFO: Unable to read jessie_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.203: INFO: Unable to read jessie_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.207: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.210: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.213: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:00.233: INFO: Lookups using dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6969 wheezy_tcp@dns-test-service.dns-6969 wheezy_udp@dns-test-service.dns-6969.svc wheezy_tcp@dns-test-service.dns-6969.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6969 jessie_tcp@dns-test-service.dns-6969 jessie_udp@dns-test-service.dns-6969.svc jessie_tcp@dns-test-service.dns-6969.svc jessie_udp@_http._tcp.dns-test-service.dns-6969.svc jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc] May 8 22:21:05.152: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.156: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.194: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.198: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.204: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.206: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.225: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.228: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.230: INFO: Unable to read jessie_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.232: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.234: INFO: Unable to read jessie_udp@dns-test-service.dns-6969.svc from pod 
dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.236: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.239: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.241: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:05.258: INFO: Lookups using dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6969 wheezy_tcp@dns-test-service.dns-6969 wheezy_udp@dns-test-service.dns-6969.svc wheezy_tcp@dns-test-service.dns-6969.svc wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6969 jessie_tcp@dns-test-service.dns-6969 jessie_udp@dns-test-service.dns-6969.svc jessie_tcp@dns-test-service.dns-6969.svc jessie_udp@_http._tcp.dns-test-service.dns-6969.svc jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc] May 8 22:21:10.153: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.157: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.166: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.173: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod 
dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.194: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.196: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.199: INFO: Unable to read jessie_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.202: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.205: INFO: Unable to read jessie_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.210: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.213: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:10.232: INFO: Lookups using dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6969 wheezy_tcp@dns-test-service.dns-6969 wheezy_udp@dns-test-service.dns-6969.svc wheezy_tcp@dns-test-service.dns-6969.svc wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6969 jessie_tcp@dns-test-service.dns-6969 jessie_udp@dns-test-service.dns-6969.svc jessie_tcp@dns-test-service.dns-6969.svc jessie_udp@_http._tcp.dns-test-service.dns-6969.svc jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc] May 8 22:21:15.153: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.156: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could 
not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.167: INFO: Unable to read wheezy_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.170: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.173: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.176: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.200: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.221: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.239: INFO: Unable to read jessie_udp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.242: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969 from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.246: INFO: Unable to read jessie_udp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.249: INFO: Unable to read jessie_tcp@dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.252: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.255: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc from pod dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896: the server could not find the requested resource (get pods dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896) May 8 22:21:15.273: INFO: Lookups using dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-6969 wheezy_tcp@dns-test-service.dns-6969 wheezy_udp@dns-test-service.dns-6969.svc wheezy_tcp@dns-test-service.dns-6969.svc wheezy_udp@_http._tcp.dns-test-service.dns-6969.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6969.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6969 jessie_tcp@dns-test-service.dns-6969 jessie_udp@dns-test-service.dns-6969.svc jessie_tcp@dns-test-service.dns-6969.svc jessie_udp@_http._tcp.dns-test-service.dns-6969.svc jessie_tcp@_http._tcp.dns-test-service.dns-6969.svc] May 8 22:21:20.232: INFO: DNS probes using dns-6969/dns-test-5fe7401a-ea50-44fc-9151-7ef758c94896 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:21:21.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6969" for this suite. • [SLOW TEST:37.321 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":241,"skipped":3988,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:21:21.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 8 22:21:21.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-9842 -- logs-generator --log-lines-total 100 --run-duration 20s' May 8 22:21:25.270: INFO: stderr: "" May 8 22:21:25.270: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 8 22:21:25.270: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 8 22:21:25.270: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9842" to be "running and ready, or succeeded" May 8 22:21:25.275: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.685346ms May 8 22:21:27.278: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.007751386s May 8 22:21:29.281: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.010963341s May 8 22:21:29.281: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 8 22:21:29.281: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings May 8 22:21:29.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9842' May 8 22:21:29.411: INFO: stderr: "" May 8 22:21:29.411: INFO: stdout: "I0508 22:21:27.647862 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/kwq 505\nI0508 22:21:27.848042 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/rjnp 490\nI0508 22:21:28.048025 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/4n5 486\nI0508 22:21:28.248158 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/q24 578\nI0508 22:21:28.447990 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/k2xn 334\nI0508 22:21:28.648049 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/whd 280\nI0508 22:21:28.848068 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/h98c 473\nI0508 22:21:29.048066 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/p7q2 507\nI0508 22:21:29.248050 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/rlm 536\n" STEP: limiting log lines May 8 22:21:29.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9842 --tail=1' May 8 22:21:29.530: INFO: stderr: "" May 8 22:21:29.530: INFO: stdout: "I0508 22:21:29.448032 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/pkfv 460\n" May 8 22:21:29.530: INFO: got output "I0508 22:21:29.448032 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/pkfv 460\n" STEP: limiting log bytes May 8 22:21:29.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9842 --limit-bytes=1' May 8 22:21:29.650: INFO: stderr: "" May 8 22:21:29.650: INFO: stdout: "I" May 8 22:21:29.650: INFO: got output "I" STEP: exposing timestamps May 8 22:21:29.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9842 --tail=1 --timestamps' May 8 22:21:29.765: INFO: stderr: "" May 8 22:21:29.765: INFO: stdout: "2020-05-08T22:21:29.648245084Z I0508 22:21:29.648081 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/k2n 482\n" May 8 22:21:29.765: INFO: got output "2020-05-08T22:21:29.648245084Z I0508 22:21:29.648081 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/k2n 482\n" STEP: restricting to a time range May 8 22:21:32.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9842 --since=1s' May 8 22:21:32.380: INFO: stderr: "" May 8 22:21:32.380: INFO: stdout: "I0508 22:21:31.448095 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/2z4 362\nI0508 22:21:31.648079 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/dff 220\nI0508 22:21:31.848040 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/552p 471\nI0508 22:21:32.048052 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/4kb 596\nI0508 22:21:32.248043 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/572k 497\n"
May 8 22:21:32.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9842 --since=24h' May 8 22:21:32.495: INFO: stderr: "" May 8 22:21:32.495: INFO: stdout: "I0508 22:21:27.647862 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/kwq 505\nI0508 22:21:27.848042 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/rjnp 490\nI0508 22:21:28.048025 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/4n5 486\nI0508 22:21:28.248158 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/q24 578\nI0508 22:21:28.447990 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/k2xn 334\nI0508 22:21:28.648049 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/whd 280\nI0508 22:21:28.848068 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/h98c 473\nI0508 22:21:29.048066 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/p7q2 507\nI0508 22:21:29.248050 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/rlm 536\nI0508 22:21:29.448032 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/pkfv 460\nI0508 22:21:29.648081 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/k2n 482\nI0508 22:21:29.848095 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/t89 232\nI0508 22:21:30.048018 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/w98m 546\nI0508 22:21:30.248051 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/kgh 550\nI0508 22:21:30.448020 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/stp 550\nI0508 22:21:30.648039 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/pxjs 581\nI0508 22:21:30.848063 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/wjtd 417\nI0508 22:21:31.048060 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/9n46 323\nI0508 22:21:31.248061 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/9v7 568\nI0508 22:21:31.448095 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/2z4 362\nI0508 22:21:31.648079 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/dff 220\nI0508 22:21:31.848040 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/552p 471\nI0508 22:21:32.048052 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/4kb 596\nI0508 22:21:32.248043 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/572k 497\nI0508 22:21:32.448055 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/g4rh 339\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 8 22:21:32.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9842' May 8 22:21:39.242: INFO: stderr: "" May 8 22:21:39.242: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:21:39.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9842" for this suite. 
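Editor's note: the filtering flags exercised in this test (--tail, --limit-bytes, --timestamps, --since) map one-to-one onto fields of the PodLogOptions API type. The following is a minimal client-go sketch of the same log-filtering call, not the e2e framework's own code; it reuses the pod and namespace names from the run above and assumes a client-go version (roughly v0.19 or later) in which rest.Request.Stream takes a context. The --limit-bytes flag corresponds to the LimitBytes field of the same struct.

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the test harness uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	tail := int64(1)  // equivalent of --tail=1
	since := int64(1) // equivalent of --since=1s
	opts := &corev1.PodLogOptions{
		Container:    "logs-generator",
		TailLines:    &tail,
		SinceSeconds: &since,
		Timestamps:   true, // equivalent of --timestamps
	}

	// GetLogs returns a rest.Request; Stream executes it and yields the log body.
	req := clientset.CoreV1().Pods("kubectl-9842").GetLogs("logs-generator", opts)
	body, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer body.Close()
	out, _ := io.ReadAll(body)
	fmt.Print(string(out))
}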
• [SLOW TEST:18.172 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":242,"skipped":3993,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:21:39.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 8 22:21:39.291: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 22:21:39.340: INFO: Waiting for terminating namespaces to be deleted... May 8 22:21:39.343: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 8 22:21:39.348: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:21:39.348: INFO: Container kindnet-cni ready: true, restart count 0 May 8 22:21:39.348: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:21:39.348: INFO: Container kube-proxy ready: true, restart count 0 May 8 22:21:39.348: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 8 22:21:39.353: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:21:39.353: INFO: Container kindnet-cni ready: true, restart count 0 May 8 22:21:39.353: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 8 22:21:39.353: INFO: Container kube-bench ready: false, restart count 0 May 8 22:21:39.353: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 8 22:21:39.353: INFO: Container kube-proxy ready: true, restart count 0 May 8 22:21:39.353: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 8 22:21:39.353: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1f106912-9b0a-4208-bc17-f11c63d07d47 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node on which pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node on which pod2 resides STEP: removing the label kubernetes.io/e2e-1f106912-9b0a-4208-bc17-f11c63d07d47 from the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1f106912-9b0a-4208-bc17-f11c63d07d47 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:21:55.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5154" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.302 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":243,"skipped":3993,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:21:55.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:21:55.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7858' May 8 22:21:55.871: INFO: stderr: "" May 8 22:21:55.871: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 8 22:21:55.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7858' May 8 22:21:56.151: INFO: stderr: "" May 8 22:21:56.151: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start.
May 8 22:21:57.155: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:21:57.155: INFO: Found 0 / 1 May 8 22:21:58.154: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:21:58.154: INFO: Found 0 / 1 May 8 22:21:59.156: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:21:59.156: INFO: Found 1 / 1 May 8 22:21:59.156: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 8 22:21:59.159: INFO: Selector matched 1 pods for map[app:agnhost] May 8 22:21:59.160: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 8 22:21:59.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-dlcfr --namespace=kubectl-7858' May 8 22:21:59.272: INFO: stderr: "" May 8 22:21:59.272: INFO: stdout: "Name: agnhost-master-dlcfr\nNamespace: kubectl-7858\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Fri, 08 May 2020 22:21:55 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.160\nIPs:\n IP: 10.244.1.160\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://03590c2f1bc07bcc5cd7b39f1724ae89cc6b37a845a29464d578bd7e28399d06\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 08 May 2020 22:21:58 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4twvs (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4twvs:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4twvs\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7858/agnhost-master-dlcfr to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 8 22:21:59.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7858' May 8 22:21:59.383: INFO: stderr: "" May 8 22:21:59.383: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7858\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-dlcfr\n" May 8 22:21:59.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7858' May 8 22:21:59.492: INFO: stderr: "" May 8 22:21:59.492: INFO: stdout: "Name: 
agnhost-master\nNamespace: kubectl-7858\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.99.110.229\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.160:6379\nSession Affinity: None\nEvents: \n" May 8 22:21:59.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 8 22:21:59.624: INFO: stderr: "" May 8 22:21:59.624: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Fri, 08 May 2020 22:21:52 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 08 May 2020 22:21:58 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 08 May 2020 22:21:58 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 08 May 2020 22:21:58 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 08 May 2020 22:21:58 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 54d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 54d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 54d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 54d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54d\nAllocated resources:\n (Total limits may be over 
100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 8 22:21:59.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7858' May 8 22:21:59.732: INFO: stderr: "" May 8 22:21:59.732: INFO: stdout: "Name: kubectl-7858\nLabels: e2e-framework=kubectl\n e2e-run=19893d35-9655-4ea1-b4ec-d41cbe464f30\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:21:59.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7858" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":244,"skipped":3996,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:21:59.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4247/secret-test-71dd90ad-0bbb-4877-8af4-4717bd1f8f3d STEP: Creating a pod to test consume secrets May 8 22:21:59.813: INFO: Waiting up to 5m0s for pod "pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1" in namespace "secrets-4247" to be "success or failure" May 8 22:21:59.844: INFO: Pod "pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.765478ms May 8 22:22:01.971: INFO: Pod "pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158309197s May 8 22:22:04.252: INFO: Pod "pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439484679s May 8 22:22:06.256: INFO: Pod "pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.443203601s STEP: Saw pod success May 8 22:22:06.256: INFO: Pod "pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1" satisfied condition "success or failure" May 8 22:22:06.258: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1 container env-test: STEP: delete the pod May 8 22:22:06.306: INFO: Waiting for pod pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1 to disappear May 8 22:22:06.335: INFO: Pod pod-configmaps-01ea7181-98f9-454d-9f54-32c60c7912a1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:06.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4247" for this suite. 
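Editor's note: what this test verifies is the Secret-to-environment wiring in the pod spec. Below is a minimal sketch of that shape using the Kubernetes Go API types; the pod, secret, and key names are illustrative assumptions, not the generated names from the run above.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podConsumingSecretEnv sketches a pod whose container gets an environment
// variable sourced from a Secret key, the arrangement this test asserts.
func podConsumingSecretEnv() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-env-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							// Illustrative secret name and key.
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}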
• [SLOW TEST:6.604 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:06.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 8 22:22:06.397: INFO: Waiting up to 5m0s for pod "pod-f38a93dc-dbd8-4658-b0d9-a04380435291" in namespace "emptydir-5084" to be "success or failure" May 8 22:22:06.401: INFO: Pod "pod-f38a93dc-dbd8-4658-b0d9-a04380435291": Phase="Pending", Reason="", readiness=false. Elapsed: 3.516195ms May 8 22:22:08.404: INFO: Pod "pod-f38a93dc-dbd8-4658-b0d9-a04380435291": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006440871s May 8 22:22:10.408: INFO: Pod "pod-f38a93dc-dbd8-4658-b0d9-a04380435291": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010334792s STEP: Saw pod success May 8 22:22:10.408: INFO: Pod "pod-f38a93dc-dbd8-4658-b0d9-a04380435291" satisfied condition "success or failure" May 8 22:22:10.411: INFO: Trying to get logs from node jerma-worker pod pod-f38a93dc-dbd8-4658-b0d9-a04380435291 container test-container: STEP: delete the pod May 8 22:22:10.469: INFO: Waiting for pod pod-f38a93dc-dbd8-4658-b0d9-a04380435291 to disappear May 8 22:22:10.480: INFO: Pod pod-f38a93dc-dbd8-4658-b0d9-a04380435291 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:10.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5084" for this suite. 
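Editor's note: the (root,0666,tmpfs) variant boils down to an emptyDir volume with medium "Memory". A minimal sketch of that volume and its mount follows; names and the mount path are illustrative, and the 0666 mode refers to a file the test container creates inside the mount, not to a volume-level setting.

package example

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir sketches the volume under test: an emptyDir backed by
// memory (tmpfs) plus the mount that exposes it to the test container.
func tmpfsEmptyDir() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	return vol, mount
}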
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4037,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:10.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 22:22:10.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6387' May 8 22:22:10.640: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 22:22:10.640: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 8 22:22:10.678: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-z6lzv] May 8 22:22:10.679: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-z6lzv" in namespace "kubectl-6387" to be "running and ready" May 8 22:22:10.681: INFO: Pod "e2e-test-httpd-rc-z6lzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.514068ms May 8 22:22:12.684: INFO: Pod "e2e-test-httpd-rc-z6lzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005790308s May 8 22:22:14.688: INFO: Pod "e2e-test-httpd-rc-z6lzv": Phase="Running", Reason="", readiness=true. Elapsed: 4.009487969s May 8 22:22:14.688: INFO: Pod "e2e-test-httpd-rc-z6lzv" satisfied condition "running and ready" May 8 22:22:14.688: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-z6lzv] May 8 22:22:14.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6387' May 8 22:22:14.818: INFO: stderr: "" May 8 22:22:14.818: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.163. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.163. 
Set the 'ServerName' directive globally to suppress this message\n[Fri May 08 22:22:13.331004 2020] [mpm_event:notice] [pid 1:tid 140250714291048] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri May 08 22:22:13.331055 2020] [core:notice] [pid 1:tid 140250714291048] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 8 22:22:14.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6387' May 8 22:22:15.008: INFO: stderr: "" May 8 22:22:15.008: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:15.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6387" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":247,"skipped":4040,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:15.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-26979c6e-13d8-4500-961a-e0b850950bcc STEP: Creating a pod to test consume secrets May 8 22:22:15.136: INFO: Waiting up to 5m0s for pod "pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af" in namespace "secrets-9887" to be "success or failure" May 8 22:22:15.139: INFO: Pod "pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.668964ms May 8 22:22:17.144: INFO: Pod "pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008432649s May 8 22:22:19.149: INFO: Pod "pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013322087s STEP: Saw pod success May 8 22:22:19.149: INFO: Pod "pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af" satisfied condition "success or failure" May 8 22:22:19.152: INFO: Trying to get logs from node jerma-worker pod pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af container secret-volume-test: STEP: delete the pod May 8 22:22:19.254: INFO: Waiting for pod pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af to disappear May 8 22:22:19.318: INFO: Pod pod-secrets-fc649d7e-b80a-4746-bad9-a485388767af no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:19.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9887" for this suite. 
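Editor's note: the "multiple volumes" case mounts one Secret at two paths in the same pod. A minimal sketch under illustrative names and paths:

package example

import corev1 "k8s.io/api/core/v1"

// secretInTwoVolumes sketches the same Secret projected read-only into a pod
// at two separate mount points, as the test above verifies.
func secretInTwoVolumes(secretName string) ([]corev1.Volume, []corev1.VolumeMount) {
	mkVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName},
			},
		}
	}
	vols := []corev1.Volume{mkVol("secret-volume-1"), mkVol("secret-volume-2")}
	mounts := []corev1.VolumeMount{
		{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
		{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
	}
	return vols, mounts
}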
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:19.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:19.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8130" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":249,"skipped":4088,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:19.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:22:19.647: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.47611ms)
May 8 22:22:19.651: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.403968ms)
May 8 22:22:19.654: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.075804ms)
May 8 22:22:19.657: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.353304ms)
May 8 22:22:19.660: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.015221ms)
May 8 22:22:19.664: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.445074ms)
May 8 22:22:19.667: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.524559ms)
May 8 22:22:19.670: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.905893ms)
May 8 22:22:19.674: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.178298ms)
May 8 22:22:19.678: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.92684ms)
May 8 22:22:19.681: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.386564ms)
May 8 22:22:19.685: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.26181ms)
May 8 22:22:19.689: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.885595ms)
May 8 22:22:19.692: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.163063ms)
May 8 22:22:19.696: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.076511ms)
May 8 22:22:19.699: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.363668ms)
May 8 22:22:19.702: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.871904ms)
May 8 22:22:19.705: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.976228ms)
May 8 22:22:19.708: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.74323ms)
May 8 22:22:19.711: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 3.139481ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:19.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5486" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":250,"skipped":4091,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:19.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 8 22:22:24.408: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3408 pod-service-account-222bbba2-3310-4bbf-a3e4-9e68b6c4b9cd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 8 22:22:24.644: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3408 pod-service-account-222bbba2-3310-4bbf-a3e4-9e68b6c4b9cd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 8 22:22:24.935: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3408 pod-service-account-222bbba2-3310-4bbf-a3e4-9e68b6c4b9cd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:25.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3408" for this suite. 
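Editor's note: the three kubectl exec reads above succeed because the service account credentials are auto-mounted at a fixed path inside every container (unless automounting is disabled). A minimal in-container sketch reading the same three files, standard library only:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// saDir is the well-known mount point of the auto-created API token.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	// Read the same three files the test cats via kubectl exec.
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			log.Fatalf("reading %s: %v", name, err)
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}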
• [SLOW TEST:5.435 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":251,"skipped":4098,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:25.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-4425c580-746e-4189-9c12-14f1e9a8e557 STEP: Creating secret with name secret-projected-all-test-volume-e29c9c09-597c-4328-a808-cc6382d7bc54 STEP: Creating a pod to test Check all projections for projected volume plugin May 8 22:22:25.274: INFO: Waiting up to 5m0s for pod "projected-volume-566278cb-117f-4eab-b048-ce68a6fda326" in namespace "projected-7852" to be "success or failure" May 8 22:22:25.277: INFO: Pod "projected-volume-566278cb-117f-4eab-b048-ce68a6fda326": Phase="Pending", Reason="", readiness=false. Elapsed: 3.530322ms May 8 22:22:27.281: INFO: Pod "projected-volume-566278cb-117f-4eab-b048-ce68a6fda326": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007390721s May 8 22:22:29.285: INFO: Pod "projected-volume-566278cb-117f-4eab-b048-ce68a6fda326": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010988567s STEP: Saw pod success May 8 22:22:29.285: INFO: Pod "projected-volume-566278cb-117f-4eab-b048-ce68a6fda326" satisfied condition "success or failure" May 8 22:22:29.288: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-566278cb-117f-4eab-b048-ce68a6fda326 container projected-all-volume-test: STEP: delete the pod May 8 22:22:29.351: INFO: Waiting for pod projected-volume-566278cb-117f-4eab-b048-ce68a6fda326 to disappear May 8 22:22:29.362: INFO: Pod projected-volume-566278cb-117f-4eab-b048-ce68a6fda326 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:29.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7852" for this suite. 
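Editor's note: the projected-volume test combines three source types under one volume, which is exactly what the projected VolumeSource expresses. A minimal sketch of that projection; the resource names are illustrative, not the generated ones from the run above.

package example

import corev1 "k8s.io/api/core/v1"

// projectedAllInOne sketches a single projected volume combining a ConfigMap,
// a Secret, and downward-API metadata, the three components this test checks.
func projectedAllInOne() corev1.Volume {
	return corev1.Volume{
		Name: "all-in-one",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
}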
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4110,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:29.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:29.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7709" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":253,"skipped":4131,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:29.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:22:29.515: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 8 22:22:32.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3755 create -f -' May 8 22:22:36.202: INFO: stderr: "" May 8 22:22:36.202: INFO: stdout: "e2e-test-crd-publish-openapi-1558-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 8 22:22:36.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3755 delete e2e-test-crd-publish-openapi-1558-crds test-cr' May 8 22:22:36.306: INFO: stderr: "" May 8 22:22:36.306: INFO: stdout: "e2e-test-crd-publish-openapi-1558-crd.crd-publish-openapi-test-unknown-in-nested.example.com 
\"test-cr\" deleted\n" May 8 22:22:36.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3755 apply -f -' May 8 22:22:36.558: INFO: stderr: "" May 8 22:22:36.558: INFO: stdout: "e2e-test-crd-publish-openapi-1558-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 8 22:22:36.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3755 delete e2e-test-crd-publish-openapi-1558-crds test-cr' May 8 22:22:36.816: INFO: stderr: "" May 8 22:22:36.816: INFO: stdout: "e2e-test-crd-publish-openapi-1558-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 8 22:22:36.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1558-crds' May 8 22:22:37.092: INFO: stderr: "" May 8 22:22:37.092: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1558-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:38.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3755" for this suite. 
• [SLOW TEST:9.533 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":254,"skipped":4152,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:38.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:22:55.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2717" for this suite. • [SLOW TEST:16.151 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":255,"skipped":4164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:22:55.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:22:55.154: INFO: Creating deployment "webserver-deployment" May 8 22:22:55.194: INFO: Waiting for observed generation 1 May 8 22:22:57.301: INFO: Waiting for all required pods to come up May 8 22:22:57.306: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 8 22:23:07.317: INFO: Waiting for deployment 
"webserver-deployment" to complete May 8 22:23:07.324: INFO: Updating deployment "webserver-deployment" with a non-existent image May 8 22:23:07.331: INFO: Updating deployment webserver-deployment May 8 22:23:07.331: INFO: Waiting for observed generation 2 May 8 22:23:09.341: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 8 22:23:09.388: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 8 22:23:09.391: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 8 22:23:09.583: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 8 22:23:09.583: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 8 22:23:09.586: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 8 22:23:09.590: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 8 22:23:09.590: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 8 22:23:09.596: INFO: Updating deployment webserver-deployment May 8 22:23:09.596: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 8 22:23:09.885: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 8 22:23:09.903: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 8 22:23:10.041: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8318 /apis/apps/v1/namespaces/deployment-8318/deployments/webserver-deployment 9a4fa553-6517-4214-8aee-837d4277c858 14553587 3 2020-05-08 22:22:55 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f16678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-08 22:23:07 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have 
minimum availability.,LastUpdateTime:2020-05-08 22:23:09 +0000 UTC,LastTransitionTime:2020-05-08 22:23:09 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 8 22:23:10.115: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8318 /apis/apps/v1/namespaces/deployment-8318/replicasets/webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 14553572 3 2020-05-08 22:23:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 9a4fa553-6517-4214-8aee-837d4277c858 0xc003f16e47 0xc003f16e48}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f16f48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 8 22:23:10.115: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 8 22:23:10.115: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8318 /apis/apps/v1/namespaces/deployment-8318/replicasets/webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 14553614 3 2020-05-08 22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 9a4fa553-6517-4214-8aee-837d4277c858 0xc003f16d27 0xc003f16d28}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f16de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 8 22:23:10.276: INFO: Pod "webserver-deployment-595b5b9587-48hd8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-48hd8 webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-48hd8 25398808-072d-4923-a7b0-619584282c85 14553621 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003f17647 0xc003f17648}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.276: INFO: Pod "webserver-deployment-595b5b9587-52d5c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-52d5c webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-52d5c d9bbc1a3-23ff-48ac-bd12-bef54a4d7959 14553619 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003f17847 0xc003f17848}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]Pod
Condition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.277: INFO: Pod "webserver-deployment-595b5b9587-5bltv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5bltv webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-5bltv 6c32006d-4c25-42eb-b6d8-eceb32288ba9 14553404 0 2020-05-08 22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003f17977 0xc003f17978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ep
hemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.64,StartTime:2020-05-08 22:22:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://efdbcd797884edc03932ecabcecad5ca3aaa9ea7d4cb1de9868c516e5b36aaf1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.277: INFO: Pod "webserver-deployment-595b5b9587-6z9h2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6z9h2 webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-6z9h2 e6579957-e00b-4f09-988f-107a1f249330 14553597 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003f17ba7 0xc003f17ba8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.277: INFO: Pod "webserver-deployment-595b5b9587-74fm5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-74fm5 webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-74fm5 21fe2512-6fa9-4edb-a03e-a9fb91ebc305 14553586 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003f17d77 0xc003f17d78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.278: INFO: Pod "webserver-deployment-595b5b9587-7mwxt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7mwxt webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-7mwxt a3090561-9d1d-4b34-aead-667fd54dbb31 14553500 0 2020-05-08 
22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003f17f07 0xc003f17f08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.67,StartTime:2020-05-08 22:22:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f53a8c2c571621dd3c04d61b93820be1fb588ecc8bcf2f732e56418095523c8d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.278: INFO: Pod "webserver-deployment-595b5b9587-bzfd9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bzfd9 webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-bzfd9 d407efaa-a089-4e89-be61-ccbb84af306c 14553620 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef6167 0xc003ef6168}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.278: INFO: Pod "webserver-deployment-595b5b9587-cr88q" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cr88q webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-cr88q f9ba67fc-b83a-4066-a1f9-8b32ba01e438 14553448 0 2020-05-08 22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef62f7 0xc003ef62f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.66,StartTime:2020-05-08 22:22:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0aeb5976fe1724ac0c5fa55e2cc9b6c1c79cb72ee48a32f07a205f8437b545ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.278: INFO: Pod "webserver-deployment-595b5b9587-cr9gr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cr9gr webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-cr9gr f0fd7807-cc02-4299-abc0-982ea3c46b02 14553607 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef65b7 0xc003ef65b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.278: INFO: Pod "webserver-deployment-595b5b9587-dbn74" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dbn74 webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-dbn74 df361ad6-8808-412e-88ae-c8732cab0fac 14553490 0 2020-05-08 22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef6767 0xc003ef6768}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.171,StartTime:2020-05-08 22:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b8447e732ba6a9c2dbf6b4a509d240a2f4363057459859439300b1e17910435d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.279: INFO: Pod "webserver-deployment-595b5b9587-hdrjj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hdrjj webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-hdrjj 21b663b9-2d6b-4248-bac2-b06bd5cb7b2a 14553442 0 2020-05-08 22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef6947 0xc003ef6948}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.170,StartTime:2020-05-08 22:22:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a9dbff9f7cdb7714387f33672c3ec0d4d1b8b4c1a42acabf0a06efaa796875b7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.279: INFO: Pod "webserver-deployment-595b5b9587-j85cb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j85cb webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-j85cb 411d2a4f-5728-43f4-8445-dafa17161c69 14553613 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef6b07 0xc003ef6b08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.279: INFO: Pod "webserver-deployment-595b5b9587-kl2s2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kl2s2 webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-kl2s2 67a02bea-46e6-4a2b-ae8c-d5f41f18b93c 14553443 0 2020-05-08 22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef6c57 0xc003ef6c58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.65,StartTime:2020-05-08 22:22:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fd23e7078682b97f1d8dc6ecafaed68ad75488b9c6255fc983525974800f1506,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.279: INFO: Pod "webserver-deployment-595b5b9587-nhkbx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nhkbx webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-nhkbx 48a80883-bbdf-4428-8d55-808c060915e1 14553503 0 2020-05-08 22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef6eb7 0xc003ef6eb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.68,StartTime:2020-05-08 22:22:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4f06233ed52124365ca18434c1fd88d74afc584b761e403da54571ef1589f1de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.279: INFO: Pod "webserver-deployment-595b5b9587-qwhdf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qwhdf webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-qwhdf 11f3880b-3b06-4100-af72-4ca821500965 14553588 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef70d7 0xc003ef70d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.279: INFO: Pod "webserver-deployment-595b5b9587-qwjgp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qwjgp webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-qwjgp a7f186ce-5ff6-4b2e-987d-a85c66ba3df0 14553608 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef7297 0xc003ef7298}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.280: INFO: Pod "webserver-deployment-595b5b9587-rxjff" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rxjff webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-rxjff 1aae1583-4596-478b-b4ad-085bfcb0b8d9 14553426 0 2020-05-08 
22:22:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef7407 0xc003ef7408}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:22:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.169,StartTime:2020-05-08 22:22:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 22:23:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2d3779caf3e26af6b50f5e08ce80f2378a61dd55f34359e5dde605a213981fc7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.280: INFO: Pod "webserver-deployment-595b5b9587-wpghp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wpghp webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-wpghp eb661935-f026-41f9-b726-0643e493a868 14553612 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef75f7 0xc003ef75f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists
,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.280: INFO: Pod "webserver-deployment-595b5b9587-wpjvv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wpjvv webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-wpjvv c7450249-1527-4949-a29e-a72d3ec22699 14553623 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef7757 0xc003ef7758}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountT
oken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-08 22:23:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.280: INFO: Pod "webserver-deployment-595b5b9587-zxwlt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zxwlt webserver-deployment-595b5b9587- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-595b5b9587-zxwlt 3d898cbf-307b-4a1d-b2be-9e2257c29080 14553605 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b94dc85b-4f29-4e44-977a-1154c2c6a80f 0xc003ef7987 0xc003ef7988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.280: INFO: Pod "webserver-deployment-c7997dcc8-7wph4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7wph4 webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-7wph4 1d1b52b6-e526-4ce7-bf92-6fe602f54452 14553591 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ef7b17 0xc003ef7b18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.280: INFO: Pod "webserver-deployment-c7997dcc8-blr9q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-blr9q webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-blr9q b90abe1f-030f-4e6f-8d33-65c1c5437812 14553616 0 2020-05-08 22:23:10 +0000 UTC 
map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ef7c87 0xc003ef7c88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.281: INFO: Pod "webserver-deployment-c7997dcc8-d5tlj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d5tlj webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-d5tlj 
5fca227b-0178-4bcb-9ac1-f8a6b7499148 14553626 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ef7e67 0xc003ef7e68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.281: INFO: Pod "webserver-deployment-c7997dcc8-mwjs7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mwjs7 webserver-deployment-c7997dcc8- deployment-8318 
/api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-mwjs7 cc21da2d-172b-4116-85da-1cc2cd8971c6 14553532 0 2020-05-08 22:23:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed4077 0xc003ed4078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-08 22:23:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.281: INFO: Pod "webserver-deployment-c7997dcc8-nc4jb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nc4jb webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-nc4jb 631d72f1-2ffe-456c-ab0f-cfca1380a5f2 14553558 0 2020-05-08 22:23:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed42a7 0xc003ed42a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-08 22:23:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.282: INFO: Pod "webserver-deployment-c7997dcc8-nlkxg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nlkxg webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-nlkxg c0838df5-bc4d-461d-b399-ed4eb1123b97 14553531 0 2020-05-08 22:23:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed4487 0xc003ed4488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-08 22:23:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.282: INFO: Pod "webserver-deployment-c7997dcc8-pr76f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pr76f webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-pr76f 24cfd2e3-7788-4956-bf0d-f7dea978ef02 14553611 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed46c7 0xc003ed46c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.282: INFO: Pod "webserver-deployment-c7997dcc8-qsbbc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qsbbc webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-qsbbc 13463f23-f047-4450-8931-b48f40f57910 14553545 0 2020-05-08 22:23:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed4867 0xc003ed4868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClas
sName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-08 22:23:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.282: INFO: Pod "webserver-deployment-c7997dcc8-rgh7w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rgh7w webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-rgh7w 6f64e0ed-1545-4625-a38e-35c91e472b98 14553618 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed4ab7 0xc003ed4ab8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.282: INFO: Pod "webserver-deployment-c7997dcc8-tc5p6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tc5p6 webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-tc5p6 c8fd8e83-de4c-441f-b9f4-caa14eacf7f8 14553617 0 2020-05-08 22:23:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed4c77 0xc003ed4c78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.282: INFO: Pod "webserver-deployment-c7997dcc8-tg5nw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tg5nw webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-tg5nw c33c3939-1ceb-4f02-be4e-9ec8444c19a9 14553604 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed4e37 0xc003ed4e38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.283: INFO: Pod "webserver-deployment-c7997dcc8-w9r7j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w9r7j webserver-deployment-c7997dcc8- deployment-8318 /api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-w9r7j 
417f1165-9406-4686-912b-5767b1668c85 14553596 0 2020-05-08 22:23:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed4ff7 0xc003ed4ff8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 8 22:23:10.283: INFO: Pod "webserver-deployment-c7997dcc8-wg7gj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wg7gj webserver-deployment-c7997dcc8- deployment-8318 
/api/v1/namespaces/deployment-8318/pods/webserver-deployment-c7997dcc8-wg7gj de896df4-bc3e-4ee9-9e4e-bfb527a83191 14553556 0 2020-05-08 22:23:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6bfe3e3-aeef-4b5f-bb79-43ea5c16fae8 0xc003ed51e7 0xc003ed51e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zc5pk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zc5pk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zc5pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 22:23:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-08 22:23:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:10.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8318" for this suite. • [SLOW TEST:15.416 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":256,"skipped":4193,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:10.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:10.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1748" for this suite. 
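For reference, a minimal client-go sketch of the discovery walk this test performs (fetch the /apis document, find the apiextensions.k8s.io group, then drill into its versions). This is an illustrative reconstruction, not the suite's code; the kubeconfig path is taken from the log and error handling is abbreviated.

package main

import (
    "fmt"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the same kubeconfig the suite logs above.
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(config)
    if err != nil {
        panic(err)
    }
    // ServerGroups fetches the /apis discovery document.
    groups, err := dc.ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        if g.Name == "apiextensions.k8s.io" {
            // The test then fetches /apis/apiextensions.k8s.io and
            // /apis/apiextensions.k8s.io/v1 and checks that the
            // customresourcedefinitions resource is listed there.
            fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
        }
    }
}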
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":257,"skipped":4197,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:10.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-305/configmap-test-08dc0611-2990-4471-8365-9d6afa847b03 STEP: Creating a pod to test consume configMaps May 8 22:23:11.114: INFO: Waiting up to 5m0s for pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab" in namespace "configmap-305" to be "success or failure" May 8 22:23:11.310: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 195.247867ms May 8 22:23:14.098: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983287958s May 8 22:23:16.373: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 5.258819251s May 8 22:23:18.727: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 7.61275472s May 8 22:23:21.146: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031337666s May 8 22:23:23.367: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 12.25272999s May 8 22:23:25.757: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 14.642282181s May 8 22:23:27.867: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 16.752738451s May 8 22:23:30.054: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Running", Reason="", readiness=true. Elapsed: 18.939641846s May 8 22:23:32.090: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.97530723s STEP: Saw pod success May 8 22:23:32.090: INFO: Pod "pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab" satisfied condition "success or failure" May 8 22:23:32.374: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab container env-test: STEP: delete the pod May 8 22:23:33.617: INFO: Waiting for pod pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab to disappear May 8 22:23:33.820: INFO: Pod pod-configmaps-dae76ccc-2525-4823-9989-c646cfedf7ab no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:33.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-305" for this suite. • [SLOW TEST:23.532 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:34.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:23:34.867: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:36.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1188" for this suite. 
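A hedged sketch of the create/delete round trip this CRD test exercises, using the apiextensions clientset. The group, kind, and names are illustrative, and the signatures follow newer client-go (Create/Delete take a context), which differs slightly from the v1.17 vintage of this run.

package main

import (
    "context"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := apiextensionsclient.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    crd := &apiextensionsv1.CustomResourceDefinition{
        // Name must be <plural>.<group>.
        ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{
                Plural: "widgets", Singular: "widget", Kind: "Widget",
            },
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                Schema: &apiextensionsv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
                },
            }},
        },
    }
    // Create, then delete, mirroring the test's round trip.
    created, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    _ = client.ApiextensionsV1().CustomResourceDefinitions().Delete(context.TODO(), created.Name, metav1.DeleteOptions{})
}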
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":259,"skipped":4239,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:37.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:23:38.034: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:39.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5863" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":260,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:39.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:23:39.703: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9f50a58a-7906-4292-839c-490fd1da1590" in namespace "security-context-test-9352" to be "success or failure" May 8 22:23:39.954: INFO: Pod "alpine-nnp-false-9f50a58a-7906-4292-839c-490fd1da1590": Phase="Pending", Reason="", readiness=false. Elapsed: 251.49273ms May 8 22:23:42.075: INFO: Pod "alpine-nnp-false-9f50a58a-7906-4292-839c-490fd1da1590": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371946591s May 8 22:23:44.131: INFO: Pod "alpine-nnp-false-9f50a58a-7906-4292-839c-490fd1da1590": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.428706942s May 8 22:23:44.131: INFO: Pod "alpine-nnp-false-9f50a58a-7906-4292-839c-490fd1da1590" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:44.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9352" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4265,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:44.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 22:23:48.432: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:48.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2973" for this suite. 
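A sketch of the container settings this termination-message test exercises, with an illustrative image and UID: the container runs as a non-root user and writes its message (the log above expects "DONE") to a non-default TerminationMessagePath, from which the kubelet reads it after exit.

package example

import corev1 "k8s.io/api/core/v1"

func terminationMessageContainer() corev1.Container {
    uid := int64(1000) // illustrative non-root UID
    return corev1.Container{
        Name:    "termination-message-container",
        Image:   "busybox:1.29", // illustrative; the suite uses its own test image
        Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
        // Non-default path: the kubelet reads the message from here on exit.
        TerminationMessagePath: "/dev/termination-custom-log",
        SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
    }
}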
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4306,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:48.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-05cc3e2d-9979-44e1-8d3e-3bf0daa80d89 STEP: Creating a pod to test consume secrets May 8 22:23:48.660: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0" in namespace "projected-3787" to be "success or failure" May 8 22:23:48.664: INFO: Pod "pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018163ms May 8 22:23:50.702: INFO: Pod "pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042780502s May 8 22:23:52.706: INFO: Pod "pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046762086s STEP: Saw pod success May 8 22:23:52.706: INFO: Pod "pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0" satisfied condition "success or failure" May 8 22:23:52.709: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0 container projected-secret-volume-test: STEP: delete the pod May 8 22:23:52.745: INFO: Waiting for pod pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0 to disappear May 8 22:23:52.767: INFO: Pod pod-projected-secrets-710e8f32-9fa5-4b7f-82eb-7e8e53861bf0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:23:52.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3787" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4324,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:23:52.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8445 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8445 STEP: creating replication controller externalsvc in namespace services-8445 I0508 22:23:53.109696 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8445, replica count: 2 I0508 22:23:56.160122 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 22:23:59.160361 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 8 22:23:59.224: INFO: Creating new exec pod May 8 22:24:03.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8445 execpodx7pcz -- /bin/sh -x -c nslookup clusterip-service' May 8 22:24:03.485: INFO: stderr: "I0508 22:24:03.374294 4523 log.go:172] (0xc000103550) (0xc0005ebae0) Create stream\nI0508 22:24:03.374350 4523 log.go:172] (0xc000103550) (0xc0005ebae0) Stream added, broadcasting: 1\nI0508 22:24:03.376578 4523 log.go:172] (0xc000103550) Reply frame received for 1\nI0508 22:24:03.376621 4523 log.go:172] (0xc000103550) (0xc0005ebcc0) Create stream\nI0508 22:24:03.376635 4523 log.go:172] (0xc000103550) (0xc0005ebcc0) Stream added, broadcasting: 3\nI0508 22:24:03.377601 4523 log.go:172] (0xc000103550) Reply frame received for 3\nI0508 22:24:03.377627 4523 log.go:172] (0xc000103550) (0xc0005ebd60) Create stream\nI0508 22:24:03.377634 4523 log.go:172] (0xc000103550) (0xc0005ebd60) Stream added, broadcasting: 5\nI0508 22:24:03.378387 4523 log.go:172] (0xc000103550) Reply frame received for 5\nI0508 22:24:03.469906 4523 log.go:172] (0xc000103550) Data frame received for 5\nI0508 22:24:03.469942 4523 log.go:172] (0xc0005ebd60) (5) Data frame handling\nI0508 22:24:03.469960 4523 log.go:172] (0xc0005ebd60) (5) Data frame sent\n+ nslookup clusterip-service\nI0508 22:24:03.477086 4523 log.go:172] (0xc000103550) Data frame received for 3\nI0508 22:24:03.477102 4523 log.go:172] (0xc0005ebcc0) (3) Data frame handling\nI0508 22:24:03.477244 4523 log.go:172] (0xc0005ebcc0) (3) 
Data frame sent\nI0508 22:24:03.478060 4523 log.go:172] (0xc000103550) Data frame received for 3\nI0508 22:24:03.478075 4523 log.go:172] (0xc0005ebcc0) (3) Data frame handling\nI0508 22:24:03.478089 4523 log.go:172] (0xc0005ebcc0) (3) Data frame sent\nI0508 22:24:03.478569 4523 log.go:172] (0xc000103550) Data frame received for 5\nI0508 22:24:03.478582 4523 log.go:172] (0xc0005ebd60) (5) Data frame handling\nI0508 22:24:03.478758 4523 log.go:172] (0xc000103550) Data frame received for 3\nI0508 22:24:03.478782 4523 log.go:172] (0xc0005ebcc0) (3) Data frame handling\nI0508 22:24:03.480516 4523 log.go:172] (0xc000103550) Data frame received for 1\nI0508 22:24:03.480529 4523 log.go:172] (0xc0005ebae0) (1) Data frame handling\nI0508 22:24:03.480547 4523 log.go:172] (0xc0005ebae0) (1) Data frame sent\nI0508 22:24:03.480564 4523 log.go:172] (0xc000103550) (0xc0005ebae0) Stream removed, broadcasting: 1\nI0508 22:24:03.480716 4523 log.go:172] (0xc000103550) Go away received\nI0508 22:24:03.480834 4523 log.go:172] (0xc000103550) (0xc0005ebae0) Stream removed, broadcasting: 1\nI0508 22:24:03.480851 4523 log.go:172] (0xc000103550) (0xc0005ebcc0) Stream removed, broadcasting: 3\nI0508 22:24:03.480872 4523 log.go:172] (0xc000103550) (0xc0005ebd60) Stream removed, broadcasting: 5\n" May 8 22:24:03.485: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8445.svc.cluster.local\tcanonical name = externalsvc.services-8445.svc.cluster.local.\nName:\texternalsvc.services-8445.svc.cluster.local\nAddress: 10.102.2.55\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8445, will wait for the garbage collector to delete the pods May 8 22:24:03.546: INFO: Deleting ReplicationController externalsvc took: 7.397605ms May 8 22:24:04.047: INFO: Terminating ReplicationController externalsvc pods took: 500.217198ms May 8 22:24:19.566: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:24:19.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8445" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.820 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":264,"skipped":4328,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:24:19.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ac0056e0-ff57-4bb8-a8cd-3d3a8ca62f3f STEP: Creating a pod to test consume secrets May 8 22:24:19.712: INFO: Waiting up to 5m0s for pod "pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48" in namespace "secrets-8499" to be "success or failure" May 8 22:24:19.726: INFO: Pod "pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48": Phase="Pending", Reason="", readiness=false. Elapsed: 14.331389ms May 8 22:24:21.730: INFO: Pod "pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017910945s May 8 22:24:23.734: INFO: Pod "pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48": Phase="Running", Reason="", readiness=true. Elapsed: 4.022238173s May 8 22:24:25.744: INFO: Pod "pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032328252s STEP: Saw pod success May 8 22:24:25.744: INFO: Pod "pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48" satisfied condition "success or failure" May 8 22:24:25.747: INFO: Trying to get logs from node jerma-worker pod pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48 container secret-volume-test: STEP: delete the pod May 8 22:24:25.823: INFO: Waiting for pod pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48 to disappear May 8 22:24:25.877: INFO: Pod pod-secrets-91a49a41-dadb-4136-ad92-a6a1191abb48 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:24:25.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8499" for this suite. 
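A sketch of the pod shape the Secrets volume test above exercises, assuming illustrative IDs and modes (the suite's exact values are not shown in this log): a non-root RunAsUser, an FSGroup applied to the volume, and a Secret volume with DefaultMode set.

package example

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretVolumePod() *corev1.Pod {
    uid, fsGroup := int64(1000), int64(1000) // illustrative non-root user and group
    defaultMode := int32(0440)               // illustrative default file mode
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,     // run the container process as non-root
                FSGroup:   &fsGroup, // group ownership applied to volume files
            },
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "secret-test",
                        DefaultMode: &defaultMode, // files default to this mode unless overridden per item
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox:1.29", // illustrative
                Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
}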
• [SLOW TEST:6.289 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:24:25.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 8 22:24:25.969: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6583 /api/v1/namespaces/watch-6583/configmaps/e2e-watch-test-label-changed cf713259-5d86-4fe1-a0df-be633547cbe3 14554401 0 2020-05-08 22:24:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 22:24:25.969: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6583 /api/v1/namespaces/watch-6583/configmaps/e2e-watch-test-label-changed cf713259-5d86-4fe1-a0df-be633547cbe3 14554402 0 2020-05-08 22:24:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 8 22:24:25.970: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6583 /api/v1/namespaces/watch-6583/configmaps/e2e-watch-test-label-changed cf713259-5d86-4fe1-a0df-be633547cbe3 14554403 0 2020-05-08 22:24:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 8 22:24:36.028: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6583 /api/v1/namespaces/watch-6583/configmaps/e2e-watch-test-label-changed cf713259-5d86-4fe1-a0df-be633547cbe3 14554443 0 2020-05-08 22:24:25 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 22:24:36.028: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6583 /api/v1/namespaces/watch-6583/configmaps/e2e-watch-test-label-changed cf713259-5d86-4fe1-a0df-be633547cbe3 14554444 0 2020-05-08 22:24:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 8 22:24:36.028: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6583 /api/v1/namespaces/watch-6583/configmaps/e2e-watch-test-label-changed cf713259-5d86-4fe1-a0df-be633547cbe3 14554445 0 2020-05-08 22:24:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:24:36.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6583" for this suite. • [SLOW TEST:10.152 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":266,"skipped":4368,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:24:36.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 22:24:40.241: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:24:40.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4559" for this suite. 
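A sketch of the container behind the FallbackToLogsOnError case just torn down (image and command illustrative). With this policy the kubelet reports the termination file's contents when the pod succeeds, which is why the log above expects "OK" from the file; it falls back to the tail of the container log only when the file is empty and the container fails.

package example

import corev1 "k8s.io/api/core/v1"

var fallbackToLogsContainer = corev1.Container{
    Name:    "termination-message-container",
    Image:   "busybox:1.29", // illustrative
    Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log; exit 0"},
    TerminationMessagePath:   "/dev/termination-log",
    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
}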
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4377,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:24:40.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:24:40.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 8 22:24:41.021: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T22:24:41Z generation:1 name:name1 resourceVersion:14554485 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2dba6de9-f5f6-4390-ad9f-c362216162ff] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 8 22:24:51.026: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T22:24:51Z generation:1 name:name2 resourceVersion:14554529 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a0f38a6a-566c-470f-9a77-a11ff3ede56d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 8 22:25:01.032: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T22:24:41Z generation:2 name:name1 resourceVersion:14554559 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2dba6de9-f5f6-4390-ad9f-c362216162ff] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 8 22:25:11.038: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T22:24:51Z generation:2 name:name2 resourceVersion:14554589 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a0f38a6a-566c-470f-9a77-a11ff3ede56d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 8 22:25:21.044: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T22:24:41Z generation:2 name:name1 resourceVersion:14554619 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2dba6de9-f5f6-4390-ad9f-c362216162ff] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 8 22:25:31.051: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T22:24:51Z generation:2 name:name2 
resourceVersion:14554649 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a0f38a6a-566c-470f-9a77-a11ff3ede56d] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:25:41.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-454" for this suite. • [SLOW TEST:61.256 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":268,"skipped":4386,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:25:41.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:25:41.635: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6" in namespace "projected-4043" to be "success or failure" May 8 22:25:41.639: INFO: Pod "downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.000275ms May 8 22:25:43.642: INFO: Pod "downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007119367s May 8 22:25:45.645: INFO: Pod "downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01015892s STEP: Saw pod success May 8 22:25:45.645: INFO: Pod "downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6" satisfied condition "success or failure" May 8 22:25:45.647: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6 container client-container: STEP: delete the pod May 8 22:25:45.876: INFO: Waiting for pod downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6 to disappear May 8 22:25:45.889: INFO: Pod downwardapi-volume-0b7a01fe-9565-4561-800c-d41ba9a79ab6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:25:45.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4043" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4393,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:25:45.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-680 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-680 STEP: creating replication controller externalsvc in namespace services-680 I0508 22:25:46.071352 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-680, replica count: 2 I0508 22:25:49.121692 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 22:25:52.121894 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 8 22:25:52.202: INFO: Creating new exec pod May 8 22:25:56.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-680 execpod7s4s8 -- /bin/sh -x -c nslookup nodeport-service' May 8 22:25:56.515: INFO: stderr: "I0508 22:25:56.398660 4545 log.go:172] (0xc000a8c000) (0xc0006efb80) Create stream\nI0508 22:25:56.398721 4545 log.go:172] (0xc000a8c000) (0xc0006efb80) Stream added, broadcasting: 1\nI0508 22:25:56.400846 4545 log.go:172] (0xc000a8c000) Reply frame received for 1\nI0508 22:25:56.400876 4545 log.go:172] (0xc000a8c000) (0xc0009ee000) Create stream\nI0508 22:25:56.400884 4545 log.go:172] (0xc000a8c000) (0xc0009ee000) Stream added, broadcasting: 3\nI0508 22:25:56.402008 4545 log.go:172] 
(0xc000a8c000) Reply frame received for 3\nI0508 22:25:56.402037 4545 log.go:172] (0xc000a8c000) (0xc0009ee0a0) Create stream\nI0508 22:25:56.402044 4545 log.go:172] (0xc000a8c000) (0xc0009ee0a0) Stream added, broadcasting: 5\nI0508 22:25:56.403171 4545 log.go:172] (0xc000a8c000) Reply frame received for 5\nI0508 22:25:56.500679 4545 log.go:172] (0xc000a8c000) Data frame received for 5\nI0508 22:25:56.500705 4545 log.go:172] (0xc0009ee0a0) (5) Data frame handling\nI0508 22:25:56.500721 4545 log.go:172] (0xc0009ee0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0508 22:25:56.507092 4545 log.go:172] (0xc000a8c000) Data frame received for 3\nI0508 22:25:56.507113 4545 log.go:172] (0xc0009ee000) (3) Data frame handling\nI0508 22:25:56.507134 4545 log.go:172] (0xc0009ee000) (3) Data frame sent\nI0508 22:25:56.507846 4545 log.go:172] (0xc000a8c000) Data frame received for 3\nI0508 22:25:56.507859 4545 log.go:172] (0xc0009ee000) (3) Data frame handling\nI0508 22:25:56.507866 4545 log.go:172] (0xc0009ee000) (3) Data frame sent\nI0508 22:25:56.508278 4545 log.go:172] (0xc000a8c000) Data frame received for 5\nI0508 22:25:56.508302 4545 log.go:172] (0xc0009ee0a0) (5) Data frame handling\nI0508 22:25:56.508334 4545 log.go:172] (0xc000a8c000) Data frame received for 3\nI0508 22:25:56.508345 4545 log.go:172] (0xc0009ee000) (3) Data frame handling\nI0508 22:25:56.510214 4545 log.go:172] (0xc000a8c000) Data frame received for 1\nI0508 22:25:56.510234 4545 log.go:172] (0xc0006efb80) (1) Data frame handling\nI0508 22:25:56.510245 4545 log.go:172] (0xc0006efb80) (1) Data frame sent\nI0508 22:25:56.510258 4545 log.go:172] (0xc000a8c000) (0xc0006efb80) Stream removed, broadcasting: 1\nI0508 22:25:56.510306 4545 log.go:172] (0xc000a8c000) Go away received\nI0508 22:25:56.510594 4545 log.go:172] (0xc000a8c000) (0xc0006efb80) Stream removed, broadcasting: 1\nI0508 22:25:56.510613 4545 log.go:172] (0xc000a8c000) (0xc0009ee000) Stream removed, broadcasting: 3\nI0508 22:25:56.510624 4545 log.go:172] (0xc000a8c000) (0xc0009ee0a0) Stream removed, broadcasting: 5\n" May 8 22:25:56.515: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-680.svc.cluster.local\tcanonical name = externalsvc.services-680.svc.cluster.local.\nName:\texternalsvc.services-680.svc.cluster.local\nAddress: 10.103.214.108\n\n" STEP: deleting ReplicationController externalsvc in namespace services-680, will wait for the garbage collector to delete the pods May 8 22:25:56.576: INFO: Deleting ReplicationController externalsvc took: 6.708988ms May 8 22:25:56.876: INFO: Terminating ReplicationController externalsvc pods took: 300.566593ms May 8 22:26:09.346: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:26:09.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-680" for this suite. 
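
The type change this spec performs reduces to rewriting the Service manifest. A minimal sketch of what nodeport-service looks like after the switch, reconstructed from the nslookup CNAME output above (the test builds the object in Go, so everything beyond the name, namespace, and CNAME target is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-680
spec:
  # An ExternalName service publishes a DNS CNAME instead of proxying traffic;
  # the selector, ports, and clusterIP of the old NodePort spec are dropped.
  type: ExternalName
  externalName: externalsvc.services-680.svc.cluster.local

The nslookup run from the exec pod confirms exactly this chain: nodeport-service resolves as a canonical name for externalsvc.services-680.svc.cluster.local.
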
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.529 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":270,"skipped":4396,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:26:09.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 8 22:26:09.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3202 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 8 22:26:12.771: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0508 22:26:12.703611 4565 log.go:172] (0xc0008220b0) (0xc000846140) Create stream\nI0508 22:26:12.703668 4565 log.go:172] (0xc0008220b0) (0xc000846140) Stream added, broadcasting: 1\nI0508 22:26:12.706271 4565 log.go:172] (0xc0008220b0) Reply frame received for 1\nI0508 22:26:12.706336 4565 log.go:172] (0xc0008220b0) (0xc0003ce0a0) Create stream\nI0508 22:26:12.706354 4565 log.go:172] (0xc0008220b0) (0xc0003ce0a0) Stream added, broadcasting: 3\nI0508 22:26:12.707475 4565 log.go:172] (0xc0008220b0) Reply frame received for 3\nI0508 22:26:12.707527 4565 log.go:172] (0xc0008220b0) (0xc000858000) Create stream\nI0508 22:26:12.707544 4565 log.go:172] (0xc0008220b0) (0xc000858000) Stream added, broadcasting: 5\nI0508 22:26:12.708497 4565 log.go:172] (0xc0008220b0) Reply frame received for 5\nI0508 22:26:12.708552 4565 log.go:172] (0xc0008220b0) (0xc0008461e0) Create stream\nI0508 22:26:12.708579 4565 log.go:172] (0xc0008220b0) (0xc0008461e0) Stream added, broadcasting: 7\nI0508 22:26:12.710057 4565 log.go:172] (0xc0008220b0) Reply frame received for 7\nI0508 22:26:12.710258 4565 log.go:172] (0xc0003ce0a0) (3) Writing data frame\nI0508 22:26:12.710379 4565 log.go:172] (0xc0003ce0a0) (3) Writing data frame\nI0508 22:26:12.711546 4565 log.go:172] (0xc0008220b0) Data frame received for 5\nI0508 22:26:12.711567 4565 log.go:172] (0xc000858000) (5) Data frame handling\nI0508 22:26:12.711580 4565 log.go:172] (0xc000858000) (5) Data frame sent\nI0508 22:26:12.712297 4565 log.go:172] (0xc0008220b0) Data frame received for 5\nI0508 22:26:12.712333 4565 log.go:172] (0xc000858000) (5) Data frame handling\nI0508 22:26:12.712371 4565 log.go:172] (0xc000858000) (5) Data frame sent\nI0508 22:26:12.747552 4565 log.go:172] (0xc0008220b0) Data frame received for 5\nI0508 22:26:12.747609 4565 log.go:172] (0xc000858000) (5) Data frame handling\nI0508 22:26:12.747639 4565 log.go:172] (0xc0008220b0) Data frame received for 7\nI0508 22:26:12.747655 4565 log.go:172] (0xc0008461e0) (7) Data frame handling\nI0508 22:26:12.748126 4565 log.go:172] (0xc0008220b0) Data frame received for 1\nI0508 22:26:12.748169 4565 log.go:172] (0xc000846140) (1) Data frame handling\nI0508 22:26:12.748184 4565 log.go:172] (0xc000846140) (1) Data frame sent\nI0508 22:26:12.748202 4565 log.go:172] (0xc0008220b0) (0xc0003ce0a0) Stream removed, broadcasting: 3\nI0508 22:26:12.748230 4565 log.go:172] (0xc0008220b0) (0xc000846140) Stream removed, broadcasting: 1\nI0508 22:26:12.748247 4565 log.go:172] (0xc0008220b0) Go away received\nI0508 22:26:12.748797 4565 log.go:172] (0xc0008220b0) (0xc000846140) Stream removed, broadcasting: 1\nI0508 22:26:12.748820 4565 log.go:172] (0xc0008220b0) (0xc0003ce0a0) Stream removed, broadcasting: 3\nI0508 22:26:12.748831 4565 log.go:172] (0xc0008220b0) (0xc000858000) Stream removed, broadcasting: 5\nI0508 22:26:12.748856 4565 log.go:172] (0xc0008220b0) (0xc0008461e0) Stream removed, broadcasting: 7\n" May 8 22:26:12.771: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:26:14.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3202" for this suite. 
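
For readers who want the declarative equivalent of the kubectl invocation above, the deprecated --generator=job/v1 path creates roughly the following Job (a sketch inferred from the flags shown in the log; the generator's exact labels, defaults, and args-vs-command handling are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
  namespace: kubectl-3202
spec:
  template:
    spec:
      restartPolicy: OnFailure                 # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29  # from --image
        stdin: true                            # from --stdin
        args: ["sh", "-c", "cat && echo 'stdin closed'"]  # assumed: flags after -- land in args

Note that --rm and --attach are client-side behaviors (attach to the pod, then delete the Job on exit) with no manifest counterpart, which is part of why the deprecation warning points at kubectl run --generator=run-pod/v1 or kubectl create instead.
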
• [SLOW TEST:5.356 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":271,"skipped":4401,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:26:14.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 8 22:26:15.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5" in namespace "projected-3547" to be "success or failure" May 8 22:26:15.155: INFO: Pod "downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.138022ms May 8 22:26:17.160: INFO: Pod "downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007405907s May 8 22:26:19.163: INFO: Pod "downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010856824s STEP: Saw pod success May 8 22:26:19.163: INFO: Pod "downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5" satisfied condition "success or failure" May 8 22:26:19.166: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5 container client-container: STEP: delete the pod May 8 22:26:19.205: INFO: Waiting for pod downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5 to disappear May 8 22:26:19.209: INFO: Pod downwardapi-volume-0112feb7-ea0c-4e42-8d31-7d6196aa9ae5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:26:19.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3547" for this suite. 
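
The assertion behind the next spec is the per-item mode field of a projected downwardAPI volume source. A minimal sketch of such a pod, assuming a busybox image and illustrative names (the test's real pod is generated in code and not shown in the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-demo      # illustrative, not the generated test name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29 # assumption: any image with a shell works
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                    # the per-item file mode the spec asserts
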
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4410,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:26:19.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 8 22:26:19.303: INFO: Waiting up to 5m0s for pod "client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286" in namespace "containers-3179" to be "success or failure" May 8 22:26:19.307: INFO: Pod "client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286": Phase="Pending", Reason="", readiness=false. Elapsed: 3.35031ms May 8 22:26:21.311: INFO: Pod "client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007755078s May 8 22:26:23.315: INFO: Pod "client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011224364s STEP: Saw pod success May 8 22:26:23.315: INFO: Pod "client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286" satisfied condition "success or failure" May 8 22:26:23.318: INFO: Trying to get logs from node jerma-worker2 pod client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286 container test-container: STEP: delete the pod May 8 22:26:23.358: INFO: Waiting for pod client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286 to disappear May 8 22:26:23.377: INFO: Pod client-containers-f916edf2-d580-41fe-80ea-7a4e4e5cc286 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:26:23.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3179" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:26:23.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d80b1af8-775f-4780-b9ee-ea4f323b62cd STEP: Creating a pod to test consume configMaps May 8 22:26:23.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674" in namespace "projected-5437" to be "success or failure" May 8 22:26:23.527: INFO: Pod "pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674": Phase="Pending", Reason="", readiness=false. Elapsed: 3.896247ms May 8 22:26:25.597: INFO: Pod "pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074110717s May 8 22:26:27.602: INFO: Pod "pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078839924s STEP: Saw pod success May 8 22:26:27.602: INFO: Pod "pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674" satisfied condition "success or failure" May 8 22:26:27.605: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674 container projected-configmap-volume-test: STEP: delete the pod May 8 22:26:27.643: INFO: Waiting for pod pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674 to disappear May 8 22:26:27.699: INFO: Pod pod-projected-configmaps-4ecd9804-8610-4f21-8d7b-552b58c76674 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:26:27.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5437" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:26:27.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:26:32.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9700" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":275,"skipped":4493,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:26:32.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d1d8046e-64e9-4321-9989-6be5d86067e7 STEP: Creating a pod to test consume secrets May 8 22:26:32.661: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b" in namespace "projected-8071" to be "success or failure" May 8 22:26:32.665: INFO: Pod "pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.705964ms May 8 22:26:34.669: INFO: Pod "pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007952789s May 8 22:26:36.674: INFO: Pod "pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b": Phase="Running", Reason="", readiness=true. Elapsed: 4.012266852s May 8 22:26:38.678: INFO: Pod "pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016310208s STEP: Saw pod success May 8 22:26:38.678: INFO: Pod "pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b" satisfied condition "success or failure" May 8 22:26:38.680: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b container projected-secret-volume-test: STEP: delete the pod May 8 22:26:38.696: INFO: Waiting for pod pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b to disappear May 8 22:26:38.701: INFO: Pod pod-projected-secrets-c9e6b284-a62c-48d4-a780-45e402ff975b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:26:38.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8071" for this suite. • [SLOW TEST:6.146 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:26:38.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-qrt9 STEP: Creating a pod to test atomic-volume-subpath May 8 22:26:38.817: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qrt9" in namespace "subpath-8701" to be "success or failure" May 8 22:26:38.835: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.628885ms May 8 22:26:40.840: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022190043s May 8 22:26:42.844: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 4.026596128s May 8 22:26:44.848: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.030048255s May 8 22:26:46.852: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.034431435s May 8 22:26:48.861: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.043387049s May 8 22:26:50.884: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 12.066916972s May 8 22:26:52.897: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 14.079484486s May 8 22:26:54.900: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.082453752s May 8 22:26:56.904: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.086239461s May 8 22:26:58.907: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.089848118s May 8 22:27:00.911: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Running", Reason="", readiness=true. Elapsed: 22.093966596s May 8 22:27:02.916: INFO: Pod "pod-subpath-test-projected-qrt9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.098218603s STEP: Saw pod success May 8 22:27:02.916: INFO: Pod "pod-subpath-test-projected-qrt9" satisfied condition "success or failure" May 8 22:27:02.919: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-qrt9 container test-container-subpath-projected-qrt9: STEP: delete the pod May 8 22:27:02.944: INFO: Waiting for pod pod-subpath-test-projected-qrt9 to disappear May 8 22:27:02.954: INFO: Pod pod-subpath-test-projected-qrt9 no longer exists STEP: Deleting pod pod-subpath-test-projected-qrt9 May 8 22:27:02.954: INFO: Deleting pod "pod-subpath-test-projected-qrt9" in namespace "subpath-8701" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:27:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8701" for this suite. 
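
The roughly 24-second Running phase above is the point of this spec: the container keeps reading a file mounted via subPath from a projected volume whose contents the kubelet manages through its atomic (symlink-swap) writer, and the mount must stay valid throughout. A reduced sketch, with the pod and container names from the log but an assumed ConfigMap source, file name, and probe loop:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-qrt9
  namespace: subpath-8701
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-projected-qrt9
    image: docker.io/library/busybox:1.29       # assumption
    # Read the subPath-mounted file repeatedly; the spec checks the mount
    # remains usable while the atomic writer manages the volume underneath.
    command: ["sh", "-c", "for i in $(seq 1 20); do cat /test-volume/file.txt; sleep 1; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/file.txt
      subPath: file.txt                         # mounts a single entry of the volume
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: subpath-fixture                 # hypothetical ConfigMap holding file.txt
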
• [SLOW TEST:24.286 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":277,"skipped":4532,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 8 22:27:02.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 8 22:27:03.091: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 8 22:27:09.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-915" for this suite. • [SLOW TEST:6.078 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":278,"skipped":4553,"failed":0} SSSSSSSSSSSMay 8 22:27:09.076: INFO: Running AfterSuite actions on all nodes May 8 22:27:09.076: INFO: Running AfterSuite actions on node 1 May 8 22:27:09.076: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4716.936 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS
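
For context on the final spec, a minimal manifest for the kind of CRD this suite creates and lists; the group, version, plural, and cluster scope are taken from the /apis/mygroup.example.com/v1beta1/noxus/... selfLinks earlier in the run, while the singular and kind names are assumptions:

apiVersion: apiextensions.k8s.io/v1beta1  # the API generation current for this v1.17 suite
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com         # must be <plural>.<group>
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster                          # the selfLinks above carry no namespace segment
  names:
    plural: noxus
    singular: noxu                        # assumption
    kind: Noxu                            # assumption: kind name not shown in the log

Listing then amounts to a GET against /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions, which is roughly what the spec drives through the Go client loaded from the kubeConfig above.
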